WorldWideScience

Sample records for high dimensional vector

  1. Multi-perspective views of students’ difficulties with one-dimensional vector and two-dimensional vector

    Science.gov (United States)

    Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno

    2017-01-01

    Researchers of students’ conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without identifying the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and in high school physics courses. In this study, we intended to explore students’ understanding of one-dimensional and two-dimensional vectors from multiple perspectives. We explored students’ understanding through a test perspective and an interview perspective. Our research study adopted a mixed-methodology design. The participants of this research were sixty third-semester students of a physics education department. The data were collected by tests and interviews. In this study, we divided the students’ understanding of one-dimensional and two-dimensional vectors into two categories, namely vector skills for the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of students provided correct answers for vector skills for the addition of one-dimensional and two-dimensional vectors, and only 27% of students provided correct answers for the relation between vector skills and conceptual understanding.

  2. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
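    As a minimal numerical illustration of the variance-inflation phenomenon this record describes (not the authors' algorithm; all dimensions and names below are arbitrary), the sketch projects held-out points onto the span of a small training set and compares the resulting variance with that of the training data:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 200, 20, 1000    # high dimension, small training set

X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))

# Orthonormal basis of the training-set span (rank <= n_train).
Q, _ = np.linalg.qr(X_train.T)        # d x n_train, columns span the training data

# Project test points onto the training span.
X_test_proj = X_test @ Q @ Q.T

print("mean squared norm, training:  ", np.mean(np.sum(X_train**2, axis=1)))
print("mean squared norm, test:      ", np.mean(np.sum(X_test**2, axis=1)))
print("mean squared norm, projected: ", np.mean(np.sum(X_test_proj**2, axis=1)))
# The projected test points carry far less variance than the training points,
# which is the mismatch a variance-inflation correction tries to compensate.
```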

  3. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce their storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
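    The sketch below illustrates the general idea of supervised dimension selection followed by 1-bit quantization. It is not the paper's importance sorting algorithm; a simple class-mean-difference score stands in for it, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, keep = 500, 4096, 256           # examples, FV/VLAD dimensionality, dims kept
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, size=n)
X[y == 1, :32] += 1.0                 # only a few dimensions are informative

# Supervised importance score per dimension (stand-in for importance sorting):
# absolute difference of class means, normalised by the overall std.
score = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (X.std(0) + 1e-12)
selected = np.argsort(score)[::-1][:keep]    # keep the most important dimensions

# 1-bit quantization of the selected dimensions: store only the sign.
X_sel = X[:, selected]
X_bits = (X_sel > 0).astype(np.uint8)        # 1 bit per retained dimension

print("storage per image:", X_bits.shape[1], "bits instead of", d * 32, "bits")
```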

  4. Inverse Operation of Four-dimensional Vector Matrix

    OpenAIRE

    H J Bao; A J Sang; H X Chen

    2011-01-01

    This is a new series of studies defining and proving multidimensional vector matrix mathematics, which includes the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse, and related properties. These are innovative concepts of multi-dimensional vector matrix mathematics created by the authors, with numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.

  5. On the existence of n-dimensional indecomposable vector bundles

    International Nuclear Information System (INIS)

    Tan Xiaojiang.

    1991-09-01

    Let X be an arbitrary smooth irreducible complex projective curve of genus g with g ≥ 4. In this paper we extend the existence theorem of special divisors to higher-dimensional indecomposable vector bundles. We give a necessary and sufficient condition for the existence of n-dimensional indecomposable vector bundles E with deg(E) = d and dim H^0(X,E) ≥ h. We also determine under what condition the set of all such vector bundles is finite and how many elements it contains. (author). 9 refs

  6. Vector (two-dimensional) magnetic phenomena

    International Nuclear Information System (INIS)

    Enokizono, Masato

    2002-01-01

    In this paper, some interesting phenomena are described from the viewpoint of the two-dimensional magnetic property, which is also referred to as the vector magnetic property. This viewpoint exposes the imperfection of the conventional (scalar) magnetic property, and some previously unnoticed phenomena were discovered as well. We found that magnetic materials have strong nonlinearity in both magnitude and spatial phase, due to the relationship between the magnetic field strength H-vector and the magnetic flux density B-vector. Therefore, magnetic properties should be defined as a vector relationship. Furthermore, a new Barkhausen signal was observed under rotating flux. (Author)

  7. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (\\texttt{hdm}) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  8. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', suited to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and the coarse mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free systems like Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information on the usage of this code, including input data instructions and sample input data. (author)

  9. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', suited to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and the coarse mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free systems like Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information on the usage of this code, including input data instructions and sample input data. (author)

  10. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (\\Rpackage{hdm}) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  11. Vectorization of three-dimensional neutron diffusion code CITATION

    International Nuclear Information System (INIS)

    Harada, Hiroo; Ishiguro, Misako

    1985-01-01

    Three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code is expected to run at high speed on recent vector supercomputers when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. In particular, calculation algorithms suited for vectorization of the inner-outer iterative calculations, which consume most of the computing time, are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner iterations given as input data are also investigated, since the computing time depends on these values. (author)
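    To illustrate why odd-even (checkerboard) mesh ordering makes SOR vectorizable, the sketch below implements a small red-black SOR solver for a 2-D Poisson problem in NumPy, where each colour sweep becomes a single array operation. This is only a generic illustration under assumed grid sizes, not the CITATION code:

```python
import numpy as np

def redblack_sor(f, h, omega=1.8, iters=500):
    """Red-black (odd-even ordered) SOR for -Laplace(u) = f on the unit square,
    zero Dirichlet boundary. Each colour sweep is a single vectorized update."""
    u = np.zeros_like(f)
    i, j = np.meshgrid(np.arange(1, f.shape[0] - 1),
                       np.arange(1, f.shape[1] - 1), indexing="ij")
    masks = [((i + j) % 2 == c) for c in (0, 1)]     # checkerboard colours
    for _ in range(iters):
        for mask in masks:
            # Gauss-Seidel value for every interior point, computed at once.
            gs = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                         u[1:-1, 2:] + u[1:-1, :-2] + h * h * f[1:-1, 1:-1])
            interior = u[1:-1, 1:-1]                  # view into u
            interior[mask] = (1 - omega) * interior[mask] + omega * gs[mask]
    return u

n = 65
f = np.ones((n, n))
u = redblack_sor(f, 1.0 / (n - 1))
print("peak of the computed solution:", float(u.max()))
```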

  12. Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    Directory of Open Access Journals (Sweden)

    Z. Peric

    2012-04-01

    Full Text Available In this paper a high dynamic range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances and outperforms scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations for the realization of high quality signal compression.
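    The sketch below illustrates the general construction: compress the radius of two-dimensional Laplacean vectors with the semilogarithmic A-law characteristic and quantize radius and phase separately. The level counts and clipping radius are illustrative choices, not the paper's optimal values:

```python
import numpy as np

def a_law(x, A=87.56):
    """Semilogarithmic A-law compressor characteristic on [0, 1]."""
    x = np.asarray(x, dtype=float)
    small = x < 1.0 / A
    out = np.empty_like(x)
    out[small] = A * x[small] / (1.0 + np.log(A))
    out[~small] = (1.0 + np.log(A * x[~small])) / (1.0 + np.log(A))
    return out

def a_law_inv(y, A=87.56):
    """Inverse (expander) of the A-law characteristic."""
    y = np.asarray(y, dtype=float)
    small = y < 1.0 / (1.0 + np.log(A))
    out = np.empty_like(y)
    out[small] = y[small] * (1.0 + np.log(A)) / A
    out[~small] = np.exp(y[~small] * (1.0 + np.log(A)) - 1.0) / A
    return out

rng = np.random.default_rng(2)
x = rng.laplace(size=(100_000, 2))              # two-dimensional Laplacean source

# Polar (product) quantizer: A-law-companded radius, uniform phase.
L_r, L_phi, r_max = 16, 32, 8.0                 # illustrative level counts
r = np.clip(np.hypot(x[:, 0], x[:, 1]), 0, r_max) / r_max
phi = np.arctan2(x[:, 1], x[:, 0])

r_idx = np.minimum((a_law(r) * L_r).astype(int), L_r - 1)
phi_idx = np.minimum(((phi + np.pi) / (2 * np.pi) * L_phi).astype(int), L_phi - 1)

# Reconstruct at cell midpoints and report the SQNR.
r_hat = a_law_inv((r_idx + 0.5) / L_r) * r_max
phi_hat = (phi_idx + 0.5) * 2 * np.pi / L_phi - np.pi
x_hat = np.stack([r_hat * np.cos(phi_hat), r_hat * np.sin(phi_hat)], axis=1)
sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - x_hat)**2))
print(f"SQNR: {sqnr:.1f} dB")
```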

  13. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte-Carlo transport simulation, which includes non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  14. Vectorized Matlab Codes for Linear Two-Dimensional Elasticity

    Directory of Open Access Journals (Sweden)

    Jonas Koko

    2007-01-01

    Full Text Available A vectorized Matlab implementation of the linear finite element method is provided for two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.
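    To illustrate what "no loop over triangles" means in practice (the original implementation is in Matlab; this is only an analogous NumPy sketch on a toy mesh, not the authors' code), the example below computes all triangle areas of a small triangulation in one array expression:

```python
import numpy as np

# A tiny structured triangulation of the unit square (nodes and triangles).
nx = ny = 4
xv, yv = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
nodes = np.column_stack([xv.ravel(), yv.ravel()])
quads = np.array([[j * nx + i, j * nx + i + 1, (j + 1) * nx + i + 1, (j + 1) * nx + i]
                  for j in range(ny - 1) for i in range(nx - 1)])
tris = np.vstack([quads[:, [0, 1, 2]], quads[:, [0, 2, 3]]])   # split quads

# Vectorized computation over all triangles at once -- no loop over triangles.
p1, p2, p3 = nodes[tris[:, 0]], nodes[tris[:, 1]], nodes[tris[:, 2]]
areas = 0.5 * np.abs((p2[:, 0] - p1[:, 0]) * (p3[:, 1] - p1[:, 1])
                     - (p3[:, 0] - p1[:, 0]) * (p2[:, 1] - p1[:, 1]))
print("number of triangles:", len(tris), " total area:", areas.sum())  # ~1.0
```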

  15. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t − 1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t − 1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t − 1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
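    As a small illustration of the exact sparse recovery problem analysed above (not of the restricted isometry analysis itself), the sketch solves basis pursuit, min ‖x‖₁ subject to Ax = b, as a linear program with SciPy; the problem sizes are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(10)
m, n, k = 60, 128, 6                       # measurements, ambient dim, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit min ||x||_1 s.t. Ax = b, written as an LP in (x, u) with x <= u, -x <= u.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds,
              method="highs")
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))   # ~0 for this regime
```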

  16. An evaluation method of cross-type H-coil angle for accurate two-dimensional vector magnetic measurement

    International Nuclear Information System (INIS)

    Maeda, Yoshitaka; Todaka, Takashi; Shimoji, Hiroyasu; Enokizono, Masato; Sievert, Johanes

    2006-01-01

    Recently, two-dimensional vector magnetic measurement has become popular, and many researchers in this field have been attracted to developing more accurate measuring systems and standard measurement systems. Because the two-dimensional vector magnetic property is the relationship between the magnetic flux density vector B and the magnetic field strength vector H, the most important parameters are those components. For accurate measurement of the field strength vector, we have developed an evaluation apparatus which consists of a standard solenoid coil and a high-precision turntable. Angle errors of a double H-coil (a cross-type H-coil), which is wound one after the other around a former, can be evaluated with this apparatus. The magnetic field strength is compensated with the measured angle error

  17. Vector current scattering in two dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Fleishon, N.L.

    1979-04-01

    The interaction of vector currents with hadrons is considered in a two-dimensional SU(N) color gauge theory coupled to fermions, in leading order in a 1/N expansion. After giving a detailed review of the model, various transition matrix elements of one and two vector currents between hadronic states are considered. A pattern is established whereby low-mass currents interact via meson dominance and highly virtual currents interact via bare quark-current couplings. This pattern is especially evident in the hadronic contribution to inelastic Compton scattering, M_{μν} = ∫ dx e^{iq·x} ⟨…⟩, which is investigated in various kinematic limits. It is shown that in the dual Regge region of soft processes the currents interact as purely hadronic systems. Modification of dimensional counting rules is indicated by a study of a large-angle scattering analog. In several hard inclusive non-light-cone processes, parton model ideas are confirmed. The impulse approximation is valid in a Bjorken-Paschos-like limit with very virtual currents. A Drell-Yan type annihilation mechanism is found in photoproduction of massive lepton pairs, leading to the identification of a parton wave function for the current. 56 references

  18. Vector calculus in non-integer dimensional space and its applications to fractal media

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus for the case of non-integer dimensional space. First- and second-order operations such as the gradient, divergence, and the scalar and vector Laplace operators for non-integer dimensional space are defined. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.

  19. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term using a null space of the coefficient matrix is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.

  20. Two-dimensional gauge model with vector U(1) and axial-vector U(1) symmetries

    International Nuclear Information System (INIS)

    Watabiki, Y.

    1989-01-01

    We have succeeded in constructing a two-dimensional gauge model with both vector U(1) and axial-vector U(1) symmetries. This model is exactly solvable. The Schwinger term vanishes in this model as a consequence of the above symmetries, and negative-norm states appear. However, the norms of physical states are always positive semidefinite due to the gauge symmetries

  1. A structural modification of the two dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo.

    1985-02-01

    Although the two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI as an optimized scalar computer code, the call for more efficient code usage arising from recent trends such as high burn-up and load-follow operation has pushed the code into a further modification stage. A principal aim of the modification is to transform the already implemented scalar subroutines into vectorized forms so that the program structure runs efficiently on high-speed vector computers. This structural modification has been carried through successfully. The two benchmark tests subsequently performed to examine the effect of the modification lead to the following concluding remarks: (1) In the first benchmark test, three comparatively high-burnup fuel rods that had been irradiated under HBWR, BWR and PWR conditions were prepared. In all cases, the net computing time consumed by the vectorized FEMAXI is approximately 50% less than that consumed by the original one. (2) In the second benchmark test, a total of 26 PWR fuel rods that had been irradiated in the burn-up range of 13-30 MWd/kgU and subsequently power-ramped in the R2 reactor, Sweden, was prepared. In this case the code was used to construct an envelope of the PCI-failure threshold through 26 code runs. To reach the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min, while the original FEMAXI-III consumed 36 min. (3) The effects of the structural modification are found to be mainly attributable to savings in the net computing time of the mechanical calculation in the vectorized FEMAXI-III code. (author)

  2. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    Science.gov (United States)

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to obtain the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
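    The paper's exact one_DVP construction is not reproduced here; the sketch below only illustrates the underlying idea of a rotation-invariant one-dimensional pattern built from the space geometry of the observed stars, using sorted angular separations under an assumed pinhole camera model with hypothetical parameters:

```python
import numpy as np

def one_d_pattern(target_xy, neighbour_xy, focal_len=1.0):
    """Rotation-invariant 1-D vector pattern for one observed star: the sorted
    angular separations between the target star and its neighbours."""
    def to_unit(xy):
        xy = np.atleast_2d(xy)
        v = np.column_stack([xy[:, 0], xy[:, 1], np.full(len(xy), focal_len)])
        return v / np.linalg.norm(v, axis=1, keepdims=True)
    t = to_unit(target_xy)[0]
    n = to_unit(neighbour_xy)
    return np.sort(np.arccos(np.clip(n @ t, -1.0, 1.0)))

rng = np.random.default_rng(3)
stars = rng.uniform(-0.1, 0.1, size=(30, 2))     # star centroids on the sensor
pattern = one_d_pattern(stars[0], stars[1:])

# Rotating the whole image about the boresight leaves the pattern unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = stars @ R.T
print("patterns match under rotation:",
      np.allclose(pattern, one_d_pattern(rotated[0], rotated[1:])))
```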

  3. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
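    One way to read "a summation of the log-transformed squared t-statistics" is sketched below for the one-sample case under a diagonal covariance assumption; the exact standardisation and calibration used in the paper may differ, and the data sizes are hypothetical:

```python
import numpy as np

def diag_lrt_one_sample(X, mu0=0.0):
    """One-sample test of H0: mean = mu0 assuming a diagonal covariance matrix.
    Per dimension, the Gaussian likelihood ratio gives n*log(1 + t^2/(n-1));
    the overall statistic sums these log-transformed squared t-statistics."""
    n, p = X.shape
    t = (X.mean(0) - mu0) / (X.std(0, ddof=1) / np.sqrt(n))   # per-dimension t
    return np.sum(n * np.log1p(t**2 / (n - 1)))

rng = np.random.default_rng(4)
n, p = 50, 1000                                  # small sample, high dimension
null_stat = diag_lrt_one_sample(rng.standard_normal((n, p)))
alt_stat = diag_lrt_one_sample(rng.standard_normal((n, p)) + 0.15)  # mean shift
print(f"statistic under H0: {null_stat:.1f}   under a mean shift: {alt_stat:.1f}")
```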

  4. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.

  5. Desingularization strategies for three-dimensional vector fields

    CERN Document Server

    Torres, Felipe Cano

    1987-01-01

    For a vector field D = A_1 ∂/∂X_1 + A_2 ∂/∂X_2 + A_3 ∂/∂X_3, where the A_i are series in X, the algebraic multiplicity measures the singularity at the origin. In this research monograph several strategies are given to make the algebraic multiplicity of a three-dimensional vector field decrease, by means of permissible blowing-ups of the ambient space, i.e. transformations of the type x_i = x'_i x_1, 2 ≤ i ≤ s. A logarithmic point of view is taken, marking the exceptional divisor of each blowing-up and considering only the vector fields which are tangent to this divisor, instead of the whole tangent sheaf. The first part of the book is devoted to the logarithmic background and to the permissible blowing-ups. The main part corresponds to the control of the algorithms for the desingularization strategies by means of numerical invariants inspired by Hironaka's characteristic polygon. Only basic knowledge of local algebra and algebraic geometry is assumed of the reader. The pathologies we find in the reduction of vector fields are analogous to pathologies in the pro...

  6. The curvature and the algebra of Killing vectors in five-dimensional space

    International Nuclear Information System (INIS)

    Rcheulishvili, G.

    1990-12-01

    This paper presents the Killing vectors for a five-dimensional space with a given line element. The algebras formed by these vectors are written down. The curvature two-forms are described. (author). 10 refs

  7. Absolute continuity of autophage measures on finite-dimensional vector spaces

    Energy Technology Data Exchange (ETDEWEB)

    Raja, C.R.E. [Stat-Math Unit, Indian Statistical Institute, Bangalore (India); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)]. E-mail: creraja@isibang.ac.in

    2002-06-01

    We consider a class of measures called autophage, which was introduced and studied by Szekely for measures on the real line. We show that the autophage measures on finite-dimensional vector spaces over the reals or Q_p are infinitely divisible without idempotent factors and are absolutely continuous with bounded continuous density. We also show that certain semistable measures on such vector spaces are absolutely continuous. (author)

  8. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru [Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991 (Russian Federation)

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration over non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  9. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration over non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  10. Anisotropic fractal media by vector calculus in non-integer dimensional space

    International Nuclear Information System (INIS)

    Tarasov, Vasily E.

    2014-01-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration over non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. Non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  11. Oracle Inequalities for High Dimensional Vector Autoregressions

    DEFF Research Database (Denmark)

    Callot, Laurent; Kock, Anders Bredahl

    This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order...
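    The sketch below is not the paper's estimator or its oracle-inequality analysis; it only illustrates the object being studied, a LASSO fit of a stationary vector autoregression, estimated equation by equation with scikit-learn on simulated data with hypothetical dimensions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
T, k, lags = 400, 20, 2                    # observations, variables, VAR order

# Simulate a sparse, stable VAR(1) as the data-generating process.
A = np.diag(np.full(k, 0.5))
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + 0.1 * rng.standard_normal(k)

# Build the lagged design matrix [y_{t-1}, ..., y_{t-lags}].
X = np.hstack([y[lags - l - 1:T - l - 1] for l in range(lags)])
Y = y[lags:]

# Equation-by-equation LASSO: one sparse regression per variable.
coefs = np.vstack([Lasso(alpha=0.01, max_iter=10_000).fit(X, Y[:, j]).coef_
                   for j in range(k)])
print("non-zero coefficients:", int((np.abs(coefs) > 1e-8).sum()),
      "out of", coefs.size)
```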

  12. Additional neutral vector boson in the 7-dimensional theory of gravi-electro-weak interactions

    International Nuclear Information System (INIS)

    Gavrilov, V.R.

    1988-01-01

    Possibilities of manifestation of an additional neutral vector boson, the existence of which is predicted by the 7-dimensional theory of gravi-electro-weak interactions, are analyzed. The particular case of muon-neutrino scattering on a muon is considered. In this case the additional neutral current manifests itself both at high and at relatively low energies of particle collisions

  13. Eruptive Massive Vector Particles of 5-Dimensional Kerr-Gödel Spacetime

    Science.gov (United States)

    Övgün, A.; Sakalli, I.

    2018-02-01

    In this paper, we investigate Hawking radiation of massive spin-1 particles from 5-dimensional Kerr-Gödel spacetime. By applying the WKB approximation and the Hamilton-Jacobi ansatz to the relativistic Proca equation, we obtain the quantum tunneling rate of the massive vector particles. Using the obtained tunneling rate, we show how one impeccably computes the Hawking temperature of the 5-dimensional Kerr-Gödel spacetime.

  14. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors, and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system is described. (author)

  15. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    Full Text Available To assist physicians in quickly finding the required 3D model from a mass of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low-frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval results than other proposed methods.
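    The DR step as described (retain only the top M low-frequency DFT coefficients of a feature vector) can be sketched as below; the feature vector here is synthetic and hypothetical, and the FVT step is not reproduced:

```python
import numpy as np

def dft_reduce(feature, M=16):
    """Keep only the M lowest-frequency DFT coefficients of a 1-D feature vector
    (the dimensionality-reduction step described above)."""
    return np.fft.rfft(feature)[:M]

def dft_expand(coeffs, length):
    """Approximate reconstruction from the retained low-frequency coefficients."""
    full = np.zeros(length // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n=length)

rng = np.random.default_rng(6)
d = 256
feature = np.cumsum(rng.standard_normal(d))   # a smooth-ish hypothetical descriptor

reduced = dft_reduce(feature, M=16)           # 16 complex values instead of 256 reals
approx = dft_expand(reduced, d)
err = np.linalg.norm(feature - approx) / np.linalg.norm(feature)
print(f"relative reconstruction error with 16 coefficients: {err:.2%}")
```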

  16. A static investigation of yaw vectoring concepts on two-dimensional convergent-divergent nozzles

    Science.gov (United States)

    Berrier, B. L.; Mason, M. L.

    1983-01-01

    The flow-turning capability and nozzle internal performance of yaw-vectoring nozzle geometries were tested in the NASA Langley 16-ft Transonic wind tunnel. The concept was investigated as a means of enhancing fighter jet performance. Five two-dimensional convergent-divergent nozzles were equipped for yaw-vectoring and examined. The configurations included a translating left sidewall, left and right sidewall flaps downstream of the nozzle throat, left sidewall flaps or port located upstream of the nozzle throat, and a powered rudder. Trials were also run with 20 deg of pitch thrust vectoring added. The feasibility of providing yaw-thrust vectoring was demonstrated, with the largest yaw vector angles being obtained with sidewall flaps downstream of the nozzle primary throat. It was concluded that yaw vector designs that scoop or capture internal nozzle flow provide the largest yaw-vector capability, but decrease the thrust the most.

  17. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  18. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.

  19. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...
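    The core idea described above, clustering on the values of distances rather than on raw coordinates, can be sketched as follows. This clusters the rows of the pairwise-distance matrix with k-means and may differ in detail from the authors' procedure; the data sizes and shift are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_per, d = 20, 5000                      # HDLSS: few samples, many dimensions

# Two groups that differ only in a mean shift spread over all dimensions.
X = np.vstack([rng.standard_normal((n_per, d)),
               rng.standard_normal((n_per, d)) + 0.4])
labels_true = np.repeat([0, 1], n_per)

# Each object is represented by its vector of distances to all other objects.
D = squareform(pdist(X))                 # n x n matrix of pairwise distances

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D)
agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
print(f"agreement with the true grouping: {agreement:.0%}")
```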

  20. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  1. Kochen-Specker vectors

    International Nuclear Information System (INIS)

    Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

    2005-01-01

    We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space H^n, n ≥ 3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found

  2. General Dimensional Multiple-Output Support Vector Regressions and Their Multiple Kernel Learning.

    Science.gov (United States)

    Chung, Wooyong; Kim, Jisu; Lee, Heejin; Kim, Euntai

    2015-11-01

    Support vector regression has been considered as one of the most important regression or function approximation methodologies in a variety of fields. In this paper, two new general dimensional multiple output support vector regressions (MSVRs) named SOCPL1 and SOCPL2 are proposed. The proposed methods are formulated in the dual space and their relationship with the previous works is clearly investigated. Further, the proposed MSVRs are extended into the multiple kernel learning and their training is implemented by the off-the-shelf convex optimization tools. The proposed MSVRs are applied to benchmark problems and their performances are compared with those of the previous methods in the experimental section.

  3. Vector Boson Scattering at High Mass

    CERN Document Server

    Sherwood, P

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate WW scalar and vector resonances, WZ vector resonances and a ZZ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons.

  4. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression pattern in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at
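    Only step (i) of the three steps listed above is sketched below, on synthetic data: the time profiles of all conditions are concatenated into one vector per gene and clustered. This is not the TimesVector software referenced in the record, and the post-processing and rescue steps are omitted; all sizes and planted patterns are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n_genes, n_times, n_conds = 300, 8, 3

# Synthetic gene x time x condition expression data with two planted patterns:
# genes 0-99 rise over time only in condition 0, genes 100-199 rise everywhere.
data = rng.normal(0.0, 0.3, size=(n_genes, n_times, n_conds))
trend = np.linspace(0, 2, n_times)[:, None]
data[:100, :, 0:1] += trend
data[100:200] += trend

# Step (i): concatenate the time profiles of all conditions into one vector
# per gene and cluster the concatenated (time-condition) vectors.
vectors = data.reshape(n_genes, n_times * n_conds)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for c in range(3):
    members = np.where(labels == c)[0]
    print(f"cluster {c}: {len(members)} genes, e.g. {members[:5]}")
```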

  5. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW $scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  6. High frequency vibration analysis by the complex envelope vectorization.

    Science.gov (United States)

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties so that a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, underlying merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity are presented.
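    The CEV procedure itself is not reproduced here; the sketch only illustrates the kind of variable transformation the abstract refers to, mapping a high-frequency oscillation to a slowly varying complex envelope via the analytic signal, with an assumed carrier frequency and signal:

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)

# A high-frequency oscillation with a slowly varying amplitude (the envelope).
carrier_hz = 800.0
envelope_true = 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t)
x = envelope_true * np.cos(2 * np.pi * carrier_hz * t)

# Demodulate: the analytic signal times exp(-i*2*pi*f0*t) gives a low-frequency
# complex envelope that can be represented far more coarsely than x itself.
analytic = hilbert(x)
complex_envelope = analytic * np.exp(-2j * np.pi * carrier_hz * t)

err = np.max(np.abs(np.abs(complex_envelope)[200:-200] - envelope_true[200:-200]))
print(f"max envelope error away from the edges: {err:.3e}")
```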

  7. Suggested Courseware for the Non-Calculus Physics Student: Measurement, Vectors, and One-Dimensional Motion.

    Science.gov (United States)

    Mahoney, Joyce; And Others

    1988-01-01

    Evaluates 16 commercially available courseware packages covering topics for introductory physics. Discusses the price, sub-topics, program type, interaction, time, calculus required, graphics, and comments of each program. Recommends two packages in measurement and vectors, and one-dimensional motion respectively. (YP)

  8. String vacuum backgrounds with covariantly constant null Killing vector and two-dimensional quantum gravity

    International Nuclear Information System (INIS)

    Tseytlin, A.A.

    1993-01-01

    We consider a two-dimensional sigma model with a (2+N)-dimensional Minkowski signature target space metric having a covariantly constant null Killing vector. We study solutions of the conformal invariance conditions in 2+N dimensions and find that generic solutions can be represented in terms of the RG flow in N-dimensional 'transverse space' theory. The resulting conformal invariant sigma model is interpreted as a quantum action of the two-dimensional scalar ('dilaton') quantum gravity model coupled to a (non-conformal) 'transverse' sigma model. The conformal factor of the two-dimensional metric is identified with a light-cone coordinate of the (2+N)-dimensional sigma model. We also discuss the case when the transverse theory is conformal (with or without the antisymmetric tensor background) and reproduce in a systematic way the solutions with flat transverse space known before. (orig.)

  9. Codimension-one tangency bifurcations of global Poincare maps of four-dimensional vector fields

    NARCIS (Netherlands)

    Krauskopf, B.; Lee, C.M.; Osinga, H.M.

    2009-01-01

    When one considers a Poincaré return map on a general unbounded (n − 1)-dimensional section for a vector field in R^n, there are typically points where the flow is tangent to the section. The only notable exception is when the system is (equivalent to) a periodically forced system. The tangencies can

  10. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification.

    Science.gov (United States)

    Song, Yang; Li, Qing; Huang, Heng; Feng, Dagan; Chen, Mei; Cai, Weidong

    2017-08-01

    Microscopy image classification is important in various biomedical applications, such as cancer subtype identification, and protein localization for high content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, MICCAI 2015 CBTC challenge data set, and IICBU malignant lymphoma, and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and the commonly used dimension reduction techniques.

  11. Vector Casimir effect for a D-dimensional sphere

    International Nuclear Information System (INIS)

    Milton, K.A.

    1997-01-01

    The Casimir energy or stress due to modes in a D-dimensional volume subject to TM (mixed) boundary conditions on a bounding spherical surface is calculated. Both interior and exterior modes are included. Together with earlier results found for scalar modes (TE modes), this gives the Casimir effect for fluctuating "electromagnetic" (vector) fields inside and outside a spherical shell. Known results for three dimensions, first found by Boyer, are reproduced. Qualitatively, the results for TM modes are similar to those for scalar modes: Poles occur in the stress at positive even dimensions, and cusps (logarithmic singularities) occur for integer dimensions D ≤ 1. Particular attention is given the interesting case of D=2. copyright 1997 The American Physical Society

  12. Higher-dimensional generalizations of the Watanabe–Strogatz transform for vector models of synchronization

    Science.gov (United States)

    Lohe, M. A.

    2018-06-01

    We generalize the Watanabe–Strogatz (WS) transform, which acts on the Kuramoto model in d = 2 dimensions, to a higher-dimensional vector transform which operates on vector oscillator models of synchronization in any dimension d, for the case of identical frequency matrices. These models have conserved quantities constructed from the cross ratios of inner products of the vector variables, which are invariant under the vector transform, and have trajectories which lie on the unit sphere S^{d−1}. Application of the vector transform leads to a partial integration of the equations of motion, leaving a reduced set of independent equations to be solved, for any number of nodes N. We discuss properties of complete synchronization and use the reduced equations to derive a stability condition for completely synchronized trajectories on S^{d−1}. We further generalize the vector transform to a mapping which acts on, and in particular preserves, the unit ball, and leaves invariant the cross ratios constructed from inner products of vectors in the ball. This mapping can be used to partially integrate a system of vector oscillators with trajectories in the unit ball, and for d = 2 it leads to an extension of the Kuramoto system to a system of oscillators with time-dependent amplitudes and trajectories in the unit disk. We find an inequivalent generalization of the Möbius map which also preserves the unit ball but leaves invariant a different set of cross ratios, this time constructed from the vector norms. This leads to a different extension of the Kuramoto model with trajectories in the complex plane that can be partially integrated by means of fractional linear transformations.
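    For context, the sketch below directly integrates a standard higher-dimensional generalization of the Kuramoto model on S^{d−1} with identical (antisymmetric) frequency matrices, the type of vector oscillator model the transform above acts on. It uses plain numerical integration rather than the partial integration described in the abstract, and all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)
N, d, K, dt, steps = 50, 4, 2.0, 0.01, 2000

# Identical antisymmetric frequency matrix Omega (so Omega @ x is tangent to the sphere).
W = rng.standard_normal((d, d))
Omega = 0.5 * (W - W.T)

# Random initial unit vectors on S^{d-1}.
X = rng.standard_normal((N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

for _ in range(steps):
    mean = X.mean(axis=0)
    # dX_i/dt = Omega X_i + K (mean - <mean, X_i> X_i): the coupling term is the
    # projection of the population centroid onto the tangent space at X_i.
    coupling = mean - (X @ mean)[:, None] * X
    X = X + dt * (X @ Omega.T + K * coupling)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # stay on the unit sphere

order = np.linalg.norm(X.mean(axis=0))              # 1.0 means full synchronization
print(f"order parameter after integration: {order:.3f}")
```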

  13. Spatial optical (2+1)-dimensional scalar- and vector-solitons in saturable nonlinear media

    Energy Technology Data Exchange (ETDEWEB)

    Weilnau, C.; Traeger, D.; Schroeder, J.; Denz, C. [Institute of Applied Physics, Westfaelische Wilhelms-Universitaet Muenster, Corrensstr. 2/4, 48149 Muenster (Germany); Ahles, M.; Petter, J. [Institute of Applied Physics, Technische Universitaet Darmstadt, Hochschulstr. 6, 64289 Darmstadt (Germany)

    2002-10-01

    (2+1)-dimensional optical spatial solitons have become a major field of research in nonlinear physics throughout the last decade due to their potential in adaptive optical communication technologies. With the help of photorefractive crystals that supply the required type of nonlinearity for soliton generation, we are able to demonstrate experimentally the formation, the dynamic properties, and especially the interaction of solitary waves, which were so far only known from general soliton theory. Among the complex interaction scenarios of scalar solitons, we reveal a distinct behavior denoted as anomalous interaction, which is unique in soliton-supporting systems. Further on, we realize highly parallel, light-induced waveguide configurations based on photorefractive screening solitons that give rise to technical applications towards waveguide couplers and dividers as well as all-optical information processing devices where light is controlled by light itself. Finally, we demonstrate the generation, stability and propagation dynamics of multi-component or vector solitons, multipole transverse optical structures bearing a complex geometry. In analogy to the particle-light dualism of scalar solitons, various types of vector solitons can - in a broader sense - be interpreted as molecules of light. (Abstract Copyright [2002], Wiley Periodicals, Inc.)

  14. Spatial optical (2+1)-dimensional scalar- and vector-solitons in saturable nonlinear media

    International Nuclear Information System (INIS)

    Weilnau, C.; Traeger, D.; Schroeder, J.; Denz, C.; Ahles, M.; Petter, J.

    2002-01-01

    (2+1)-dimensional optical spatial solitons have become a major field of research in nonlinear physics throughout the last decade due to their potential in adaptive optical communication technologies. With the help of photorefractive crystals that supply the required type of nonlinearity for soliton generation, we are able to demonstrate experimentally the formation, the dynamic properties, and especially the interaction of solitary waves, which were so far only known from general soliton theory. Among the complex interaction scenarios of scalar solitons, we reveal a distinct behavior denoted as anomalous interaction, which is unique in soliton-supporting systems. Further on, we realize highly parallel, light-induced waveguide configurations based on photorefractive screening solitons that give rise to technical applications towards waveguide couplers and dividers as well as all-optical information processing devices where light is controlled by light itself. Finally, we demonstrate the generation, stability and propagation dynamics of multi-component or vector solitons, multipole transverse optical structures bearing a complex geometry. In analogy to the particle-light dualism of scalar solitons, various types of vector solitons can - in a broader sense - be interpreted as molecules of light. (Abstract Copyright [2002], Wiley Periodicals, Inc.)

  15. A method for real-time three-dimensional vector velocity imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav

    2003-01-01

    The paper presents an approach for making real-time three-dimensional vector flow imaging. Synthetic aperture data acquisition is used, and the data is beamformed along the flow direction to yield signals usable for flow estimation. The signals are cross-correlated to determine the shift in position...... are done using 16 × 16 = 256 elements at a time and the received signals from the same elements are sampled. Access to the individual elements is done through 16-to-1 multiplexing, so that only a 256-channel transmitting and receiving system is needed. The method has been investigated using Field II...
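
    The core estimation step, finding the spatial shift between beamformed signals from consecutive emissions by cross-correlation, can be sketched in a few lines. The sampling interval, pulse repetition time, and simulated shift below are illustrative assumptions, not parameters of the system described in the record.

      import numpy as np

      rng = np.random.default_rng(0)
      dx = 0.1e-3                      # spatial sampling of the beamformed line [m] (assumed)
      t_prf = 1.0 / 3000.0             # time between emissions [s] (assumed)
      true_shift = 4                   # shift in samples between the two emissions

      sig1 = rng.normal(size=512)
      sig2 = np.roll(sig1, true_shift) + 0.05 * rng.normal(size=512)

      # Cross-correlate and locate the lag of the correlation peak.
      corr = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), mode="full")
      lag = np.argmax(corr) - (len(sig1) - 1)

      velocity = lag * dx / t_prf      # shift per emission converted to velocity along the beamformed direction
      print(f"estimated shift: {lag} samples, velocity ~ {velocity:.3f} m/s")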

  16. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  17. Static investigation of two fluidic thrust-vectoring concepts on a two-dimensional convergent-divergent nozzle

    Science.gov (United States)

    Wing, David J.

    1994-01-01

    A static investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel of two thrust-vectoring concepts which utilize fluidic mechanisms for deflecting the jet of a two-dimensional convergent-divergent nozzle. One concept involved using the Coanda effect to turn a sheet of injected secondary air along a curved sidewall flap and, through entrainment, draw the primary jet in the same direction to produce yaw thrust vectoring. The other concept involved deflecting the primary jet to produce pitch thrust vectoring by injecting secondary air through a transverse slot in the divergent flap, creating an oblique shock in the divergent channel. Utilizing the Coanda effect to produce yaw thrust vectoring was largely unsuccessful. Small vector angles were produced at low primary nozzle pressure ratios, probably because the momentum of the primary jet was low. Significant pitch thrust vector angles were produced by injecting secondary flow through a slot in the divergent flap. Thrust vector angle decreased with increasing nozzle pressure ratio but moderate levels were maintained at the highest nozzle pressure ratio tested. Thrust performance generally increased at low nozzle pressure ratios and decreased near the design pressure ratio with the addition of secondary flow.

  18. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
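
    The early-warning construction, projecting the instantaneous configuration onto the unstable eigendirections of the mean-field stability matrix, can be sketched generically. The Jacobian, reference configuration, and state used below are synthetic placeholders, since the Tangled Nature model itself is not reproduced here.

      import numpy as np

      def unstable_overlap(jacobian, state, reference):
          """Total overlap of the deviation from a quasi-stable reference configuration
          with the unstable eigendirections of the mean-field stability matrix."""
          eigvals, eigvecs = np.linalg.eig(jacobian)
          unstable = eigvecs[:, eigvals.real > 0]            # directions with positive growth rate
          if unstable.shape[1] == 0:
              return 0.0
          dev = state - reference
          dev = dev / (np.linalg.norm(dev) + 1e-12)
          return float(np.sum(np.abs(unstable.conj().T @ dev)))

      rng = np.random.default_rng(3)
      n = 50
      J = -np.eye(n) + 0.1 * rng.normal(size=(n, n))         # mostly stable synthetic Jacobian
      ref = rng.normal(size=n)
      indicator = unstable_overlap(J, ref + 0.01 * rng.normal(size=n), ref)
      print("early-warning indicator:", round(indicator, 4))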

  19. Predicting respiratory tumor motion with multi-dimensional adaptive filters and support vector regression

    International Nuclear Information System (INIS)

    Riaz, Nadeem; Wiersma, Rodney; Mao Weihua; Xing Lei; Shanker, Piyush; Gudmundsson, Olafur; Widrow, Bernard

    2009-01-01

    Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
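
    The support vector regression predictor can be sketched with a standard implementation: recent position samples form the input vector and the position one latency interval ahead is the regression target. The sampling rate, latency, window length, and synthetic breathing trace below are assumptions for illustration, not values from the study.

      import numpy as np
      from sklearn.svm import SVR

      fs = 30.0                                   # tracking rate [Hz]
      latency_samples = int(0.4 * fs)             # predict 400 ms ahead
      n_lags = 15                                 # number of past samples fed to the regressor

      t = np.arange(0, 120, 1 / fs)
      pos = 10 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

      # Build (past window) -> (future position) training pairs.
      X, y = [], []
      for i in range(n_lags, len(pos) - latency_samples):
          X.append(pos[i - n_lags:i])
          y.append(pos[i + latency_samples])
      X, y = np.array(X), np.array(y)

      split = int(0.7 * len(X))
      model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:split], y[:split])
      pred = model.predict(X[split:])
      rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
      print(f"RMSE at 400 ms latency on the synthetic trace: {rmse:.2f} mm")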

  20. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    Science.gov (United States)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry the experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by the judiciously tailored unidirectional wave vectors in space.

  1. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    Science.gov (United States)

    2016-12-01

    Approved for public release; distribution is unlimited. This report describes 2 C++ classes for vector algebra and rotations in 3 dimensions: a Vector class for performing vector algebra in 3-dimensional space and a companion class for rotations (ARL-TR-7894, US Army Research Laboratory, December 2016, by Richard Saucier).
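
    Since the report itself is not reproduced here, a rough Python analogue of the kind of 3-dimensional vector operations such classes provide (cross product, rotation about an axis via Rodrigues' formula) is sketched below; the interface is invented for illustration and does not mirror the C++ classes in the report.

      import numpy as np

      def cross(a, b):
          """Cross product of two 3-dimensional vectors."""
          return np.array([a[1]*b[2] - a[2]*b[1],
                           a[2]*b[0] - a[0]*b[2],
                           a[0]*b[1] - a[1]*b[0]])

      def rotate(v, axis, angle):
          """Rotate v about a unit axis by angle (radians) using Rodrigues' formula."""
          axis = axis / np.linalg.norm(axis)
          return (v * np.cos(angle)
                  + cross(axis, v) * np.sin(angle)
                  + axis * np.dot(axis, v) * (1 - np.cos(angle)))

      v = np.array([1.0, 0.0, 0.0])
      z = np.array([0.0, 0.0, 1.0])
      print(rotate(v, z, np.pi / 2))   # ~ [0, 1, 0]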

  2. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....
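
    The prediction game described above can be mimicked with standard tools: each "player" fits a different learner on the same high-dimensional data and is scored by cross-validated prediction error. The synthetic data below is a stand-in for the Nugenob metabolomics data, which is not reproduced here.

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.svm import SVR
      from sklearn.linear_model import Lasso
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: few samples, many (mostly irrelevant) features.
      X, y = make_regression(n_samples=120, n_features=500, n_informative=10,
                             noise=5.0, random_state=0)

      players = {
          "support vector machine": SVR(kernel="rbf", C=10.0),
          "LASSO": Lasso(alpha=1.0),
          "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
      }
      for name, model in players.items():
          scores = cross_val_score(model, X, y, cv=5,
                                   scoring="neg_root_mean_squared_error")
          print(f"{name:>22s}: CV RMSE = {-scores.mean():.2f}")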

  3. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method...... for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

  4. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high dimension space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.
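
    The second challenge, the loss of meaningfulness of distance-based similarity in high dimensions, is easy to demonstrate numerically: as the dimension grows, the gap between the nearest and farthest neighbour shrinks relative to the nearest distance. The short sketch below illustrates this on uniform random data; the sample sizes and dimensions are arbitrary choices for the demonstration.

      import numpy as np

      rng = np.random.default_rng(0)
      for d in (2, 10, 100, 1000):
          X = rng.uniform(size=(500, d))
          q = rng.uniform(size=d)
          dist = np.linalg.norm(X - q, axis=1)
          contrast = (dist.max() - dist.min()) / dist.min()
          print(f"d = {d:4d}: relative contrast (max-min)/min = {contrast:.3f}")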

  5. New techniques for the scientific visualization of three-dimensional multi-variate and vector fields

    Energy Technology Data Exchange (ETDEWEB)

    Crawfis, Roger A. [Univ. of California, Davis, CA (United States)

    1995-10-01

    Volume rendering allows us to represent a density cloud with ideal properties (single scattering, no self-shadowing, etc.). Scientific visualization utilizes this technique by mapping an abstract variable or property in a computer simulation to a synthetic density cloud. This thesis extends volume rendering from its limitation of isotropic density clouds to anisotropic and/or noisy density clouds. Design aspects of these techniques are discussed that aid in the comprehension of scientific information. Anisotropic volume rendering is used to represent vector based quantities in scientific visualization. Velocity and vorticity in a fluid flow, electric and magnetic waves in an electromagnetic simulation, and blood flow within the body are examples of vector based information within a computer simulation or gathered from instrumentation. Understanding these fields can be crucial to understanding the overall physics or physiology. Three techniques for representing three-dimensional vector fields are presented: Line Bundles, Textured Splats and Hair Splats. These techniques are aimed at providing a high-level (qualitative) overview of the flows, offering the user a substantial amount of information with a single image or animation. Non-homogeneous volume rendering is used to represent multiple variables. Computer simulations can typically have over thirty variables, which describe properties whose understanding is useful to the scientist. Trying to understand each of these separately can be time consuming. Trying to understand any cause and effect relationships between different variables can be impossible. NoiseSplats is introduced to represent two or more properties in a single volume rendering of the data. This technique is also aimed at providing a qualitative overview of the flows.

  6. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization on the first step (to control for sparsity) and constrained least squares estimation on the second step (to improve bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
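
    The two-step estimation idea (regularize first for sparsity, then re-estimate the selected coefficients by unpenalized least squares) can be sketched for a simple VAR(1) model as follows. This is a simplified illustration of a LASSO-then-LSE strategy on synthetic data, not the authors' code; the lag order, penalty, and network size are assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      rng = np.random.default_rng(0)
      p, T = 15, 400                                     # channels, time points
      A_true = np.zeros((p, p))
      A_true[np.arange(p), np.arange(p)] = 0.5           # sparse true VAR(1) coefficients
      A_true[0, 1] = 0.3

      X = np.zeros((T, p))
      for t in range(1, T):
          X[t] = X[t - 1] @ A_true.T + 0.1 * rng.normal(size=p)

      Y, Z = X[1:], X[:-1]                               # Y[t] ~= A Z[t]
      A_hat = np.zeros((p, p))
      for i in range(p):                                 # one regression per channel
          lasso = Lasso(alpha=0.01, fit_intercept=False).fit(Z, Y[:, i])
          support = np.flatnonzero(lasso.coef_)          # step 1: LASSO selects the sparsity pattern
          if support.size:                               # step 2: least squares refit on the selected support
              ols = LinearRegression(fit_intercept=False).fit(Z[:, support], Y[:, i])
              A_hat[i, support] = ols.coef_

      print("max abs error of refitted coefficients:", np.abs(A_hat - A_true).max().round(3))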

  7. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization on the first step (to control for sparsity) and constrained least squares estimation on the second step (to improve bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  8. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  9. Music Signal Processing Using Vector Product Neural Networks

    Science.gov (United States)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.

  10. Visualizing vector field topology in fluid flows

    Science.gov (United States)

    Helman, James L.; Hesselink, Lambertus

    1991-01-01

    Methods of automating the analysis and display of vector field topology in general and flow topology in particular are discussed. Two-dimensional vector field topology is reviewed as the basis for the examination of topology in three-dimensional separated flows. The use of tangent surfaces and clipping in visualizing vector field topology in fluid flows is addressed.

  11. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day. Such a trend poses various challenges as these methods are not suitable to apply directly to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
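
    The partition-and-vote scheme can be sketched with standard components: the feature space is split into disjoint subsets (here simply at random, rather than by the redundancy-based partitioning the paper proposes), one SVM is trained per subset, and predictions are combined by majority voting. The data and the number of subsets below are assumptions for the sketch.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=300, n_features=600, n_informative=20, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      rng = np.random.default_rng(0)
      perm = rng.permutation(X.shape[1])
      subsets = np.array_split(perm, 10)                 # 10 disjoint feature subsets

      votes = []
      for cols in subsets:
          clf = SVC(kernel="rbf").fit(Xtr[:, cols], ytr)
          votes.append(clf.predict(Xte[:, cols]))
      votes = np.array(votes)                            # (n_subsets, n_test)
      majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote for binary labels
      print("ensemble accuracy:", (majority == yte).mean().round(3))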

  12. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. The principal component analysis (PCA) is then used for the purpose of dimensionality reduction of the high-dimensional feature vector including the extracted texture features due to the fact that the high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes the one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
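
    The final classification stage, PCA-reduced features fed to one-against-all SVMs with an RBF kernel, can be sketched as below. The texture-feature extraction step (DNS maps computed from vibration images) is not reproduced, so random class-dependent features stand in, and the feature and class counts are assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_classes, n_per_class, n_features = 4, 60, 400    # assumed sizes for the illustration
      X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
                     for c in range(n_classes)])
      y = np.repeat(np.arange(n_classes), n_per_class)

      model = make_pipeline(PCA(n_components=20),
                            OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")))
      print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))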

  13. Equivalent Vectors

    Science.gov (United States)

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. When learning about this for the first time in Calculus III, the class was taught that if A×B = A×C, it does not necessarily follow that B = C. This seemed baffling. The…
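
    The point is easy to verify numerically: if C = B + tA for any scalar t, then A×B = A×C even though B ≠ C, because A×A = 0. A quick, purely illustrative check:

      import numpy as np

      A = np.array([1.0, 2.0, 3.0])
      B = np.array([0.5, -1.0, 2.0])
      C = B + 4.0 * A                       # differs from B by a multiple of A

      print(np.cross(A, B))                 # same result ...
      print(np.cross(A, C))                 # ... because A x A = 0
      print(np.allclose(np.cross(A, B), np.cross(A, C)), np.allclose(B, C))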

  14. All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector

    Science.gov (United States)

    Chudecki, Adam

    2016-12-01

    Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.

  15. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks.

  16. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) For computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors encoded by Fisher vector (FV) subsequently; (ii) For obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) In order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) For reducing the storage and CPU costs of high dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on dataset STL-10. It shows very promising performance with this simple and efficient framework compared to conventional methods.
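
    The mutual-information feature selection step, ranking encoded dimensions by supervised MI and keeping only the top-scoring ones, can be sketched with standard tools; the synthetic data below is a placeholder for the encoded patch vectors, and the number of retained features is an arbitrary choice.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      X, y = make_classification(n_samples=300, n_features=1000, n_informative=30, random_state=0)

      # Rank features by mutual information with the labels and keep the top 100.
      selector = SelectKBest(score_func=mutual_info_classif, k=100)
      model = make_pipeline(selector, LinearSVC(C=1.0, max_iter=5000))
      print("CV accuracy with MI-selected features:",
            cross_val_score(model, X, y, cv=3).mean().round(3))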

  17. Signed zeros of Gaussian vector fields - density, correlation functions and curvature

    CERN Document Server

    Foltin, G

    2003-01-01

    We calculate correlation functions of the (signed) density of zeros of Gaussian distributed vector fields. We are able to express correlation functions of arbitrary order through the curvature tensor of a certain abstract Riemann Cartan or Riemannian manifold. As an application, we discuss one- and two-point functions. The zeros of a two-dimensional Gaussian vector field model the distribution of topological defects in the high-temperature phase of two-dimensional systems with orientational degrees of freedom, such as superfluid films, thin superconductors and liquid crystals.

  18. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  19. A terrestrial lidar-based workflow for determining three-dimensional slip vectors and associated uncertainties

    Science.gov (United States)

    Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.

    2012-01-01

    Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.
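
    The Monte Carlo treatment of measurement uncertainty can be sketched in a generic form: perturb the two piercing points of a displaced landform with the estimated survey and reconstruction errors, and record the spread in the resulting slip-vector orientation. The coordinates and error magnitudes below are invented for illustration and are not values from the Dixie Valley sites.

      import numpy as np

      rng = np.random.default_rng(0)
      p_footwall = np.array([0.0, 0.0, 0.0])       # reconstructed piercing points [m] (assumed)
      p_hangingwall = np.array([2.0, 1.0, -0.8])
      sigma = np.array([0.05, 0.05, 0.10])         # per-axis 1-sigma uncertainty [m] (assumed)

      n = 10000
      slips = (p_hangingwall + rng.normal(scale=sigma, size=(n, 3))) \
              - (p_footwall + rng.normal(scale=sigma, size=(n, 3)))

      trend = np.degrees(np.arctan2(slips[:, 1], slips[:, 0]))       # azimuth in the horizontal plane
      plunge = np.degrees(np.arctan2(-slips[:, 2], np.hypot(slips[:, 0], slips[:, 1])))

      for name, v in (("trend", trend), ("plunge", plunge)):
          print(f"{name}: {v.mean():.1f} deg +/- {v.std():.1f} deg (1 sigma)")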

  20. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known...... to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches...... for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  1. Volume scanning three-dimensional display with an inclined two-dimensional display and a mirror scanner

    Science.gov (United States)

    Miyazaki, Daisuke; Kawanishi, Tsuyoshi; Nishimura, Yasuhiro; Matsushita, Kenji

    2001-11-01

    A new three-dimensional display system based on a volume-scanning method is demonstrated. To form a three-dimensional real image, an inclined two-dimensional image is rapidly moved with a mirror scanner while the cross-section patterns of a three-dimensional object are displayed sequentially. A vector-scan CRT display unit is used to obtain a high-resolution image. An optical scanning system is constructed with concave mirrors and a galvanometer mirror. It is confirmed that three-dimensional images, formed by the experimental system, satisfy all the criteria for human stereoscopic vision.

  2. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  3. Monte Carlo simulation of the three-state vector Potts model on a three-dimensional random lattice

    International Nuclear Information System (INIS)

    Jianbo Zhang; Heping Ying

    1991-09-01

    We have performed a numerical simulation of the three-state vector Potts model on a three-dimensional random lattice. The averages of energy density, magnetization, specific heat and susceptibility of the system on N^3 (N=8,10,12) lattices were calculated. The results show that a first-order nature of the Z(3) symmetry-breaking transition appears, as characterized by a thermal hysteresis in the energy density as well as an abrupt drop of magnetization becoming sharper and more discontinuous with increasing volume in the cross-over region. The results obtained on the random lattice were consistent with those obtained on the three-dimensional cubic lattice. (author). 12 refs, 4 figs
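
    A stripped-down Metropolis simulation of the three-state Potts model conveys the basic procedure. The sketch below uses an ordinary cubic lattice (not the random lattice of the study) and an arbitrarily chosen inverse temperature, purely for illustration.

      import numpy as np

      def potts_metropolis(N=8, q=3, beta=0.55, sweeps=200, seed=0):
          """Metropolis updates of the q-state Potts model on an N^3 cubic lattice."""
          rng = np.random.default_rng(seed)
          s = rng.integers(q, size=(N, N, N))

          def local_energy(cfg, i, j, k):
              e = 0
              for d in range(3):
                  for step in (-1, 1):
                      idx = [i, j, k]
                      idx[d] = (idx[d] + step) % N          # periodic boundaries
                      e -= int(cfg[i, j, k] == cfg[tuple(idx)])
              return e

          for _ in range(sweeps):
              for _ in range(N ** 3):
                  i, j, k = rng.integers(N, size=3)
                  old, new = s[i, j, k], rng.integers(q)
                  e_old = local_energy(s, i, j, k)
                  s[i, j, k] = new
                  e_new = local_energy(s, i, j, k)
                  if rng.random() >= np.exp(-beta * (e_new - e_old)):
                      s[i, j, k] = old                      # reject the move
          return np.max(np.bincount(s.ravel(), minlength=q)) / N ** 3

      print("largest-state fraction (magnetization proxy):", round(potts_metropolis(), 3))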

  4. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is the limited applicability for very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and freely available for researchers.

  5. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
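
    The active-subspace idea can be illustrated with the classical gradient-based construction (which the paper replaces with its gradient-free probabilistic version): estimate C = E[∇f ∇f^T] by Monte Carlo, take its leading eigenvectors as the projection, and fit a GP on the projected inputs. The test function, dimensions, and kernel below are invented for the example.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def f(x):                      # toy 20-d function that varies only along one hidden direction
          return np.sin(x @ w_true)

      def grad_f(x):
          return np.cos(x @ w_true)[:, None] * w_true

      rng = np.random.default_rng(0)
      dim = 20
      w_true = rng.normal(size=dim)
      w_true /= np.linalg.norm(w_true)

      X = rng.uniform(-1, 1, size=(300, dim))
      y = f(X)

      # Classical active subspace: eigen-decompose the average outer product of gradients.
      G = grad_f(X)
      C = G.T @ G / len(X)
      eigvals, eigvecs = np.linalg.eigh(C)
      W = eigvecs[:, -1:]                       # leading eigenvector spans the active subspace

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
      gp.fit(X @ W, y)                          # learn the link function on the 1-d projection
      X_test = rng.uniform(-1, 1, size=(50, dim))
      print("test RMSE:", np.sqrt(np.mean((gp.predict(X_test @ W) - f(X_test)) ** 2)).round(4))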

  6. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  7. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  8. The Use of Sparse Direct Solver in Vector Finite Element Modeling for Calculating Two Dimensional (2-D) Magnetotelluric Responses in Transverse Electric (TE) Mode

    Science.gov (United States)

    Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.

    2018-04-01

    In recent research, the linear systems arising from the vector finite element method for two-dimensional (2-D) magnetotelluric (MT) response modelling in TE mode were solved with a non-sparse direct solver. Nevertheless, there are weaknesses that have to be addressed, especially the accuracy at low frequencies (10^-3 Hz to 10^-5 Hz), which has not yet been achieved, and the high computational cost for dense meshes. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses. A sparse direct solver is advantageous for the linear systems of the vector finite element method because the matrices are symmetric and sparse. The sparse direct solver has been validated for the vector finite element linear systems on a homogeneous half-space model and a vertical contact model against analytical solutions. The validation shows that the sparse direct solver is more stable than the non-sparse direct solver in computing the linear problem of the vector finite element method, especially at low frequencies. In the end, accurate 2-D MT response modelling at low frequencies (10^-3 Hz to 10^-5 Hz) has been achieved with efficient array memory allocation and less computational time.
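
    The practical advantage of a sparse direct solver for the symmetric, sparse systems produced by vector finite elements can be illustrated with a generic sparse system; the matrix below is a simple 2-D Laplacian-like stand-in, not the actual MT finite element matrix.

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import spsolve

      n = 200                                         # grid size of the stand-in problem
      I = sparse.identity(n)
      T = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = sparse.kron(I, T) + sparse.kron(T, I)       # sparse, symmetric 2-D Laplacian (n^2 x n^2)
      b = np.ones(A.shape[0])

      x_sparse = spsolve(A.tocsc(), b)                # sparse direct solve
      # A dense solve of the same system would require storing an n^2 x n^2 full matrix
      # (here 40000 x 40000, roughly 12 GB in double precision), which is exactly the
      # cost the sparse direct solver avoids.
      print("residual norm:", np.linalg.norm(A @ x_sparse - b))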

  9. Brane vector phenomenology

    International Nuclear Information System (INIS)

    Clark, T.E.; Love, S.T.; Nitta, Muneto; Veldhuis, T. ter; Xiong, C.

    2009-01-01

    Local oscillations of the brane world are manifested as massive vector fields. Their coupling to the Standard Model can be obtained using the method of nonlinear realizations of the spontaneously broken higher-dimensional space-time symmetries, and to an extent, are model independent. Phenomenological limits on these vector field parameters are obtained using LEP collider data and dark matter constraints

  10. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and automatic-vectorizing Fortran compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector calculations and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  11. Vectors and their applications

    CERN Document Server

    Pettofrezzo, Anthony J

    2005-01-01

    Geared toward undergraduate students, this text illustrates the use of vectors as a mathematical tool in plane synthetic geometry, plane and spherical trigonometry, and analytic geometry of two- and three-dimensional space. Its rigorous development includes a complete treatment of the algebra of vectors in the first two chapters.Among the text's outstanding features are numbered definitions and theorems in the development of vector algebra, which appear in italics for easy reference. Most of the theorems include proofs, and coordinate position vectors receive an in-depth treatment. Key concept

  12. Completeness of the System of Root Vectors of 2 × 2 Upper Triangular Infinite-Dimensional Hamiltonian Operators in Symplectic Spaces and Applications

    Institute of Scientific and Technical Information of China (English)

    Hua WANG; ALATANCANG; Junjie HUANG

    2011-01-01

    The authors investigate the completeness of the system of eigen or root vectors of the 2 × 2 upper triangular infinite-dimensional Hamiltonian operator H0. First, the geometrical multiplicity and the algebraic index of the eigenvalue of H0 are considered. Next, some necessary and sufficient conditions for the completeness of the system of eigen or root vectors of H0 are obtained. Finally, the obtained results are tested in several examples.

  13. On vector fields having properties of Reeb fields

    OpenAIRE

    Hajduk, Boguslaw; Walczak, Rafal

    2011-01-01

    We study constructions of vector fields with properties which are characteristic of Reeb vector fields of contact forms. In particular, we prove that all closed oriented odd-dimensional manifolds have geodesible vector fields.

  14. Multi-task Vector Field Learning.

    Science.gov (United States)

    Lin, Binbin; Yang, Sen; Zhang, Chiyuan; Ye, Jieping; He, Xiaofei

    2012-01-01

    Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously and identifying the shared information among tasks. Most existing MTL methods focus on learning linear models under the supervised setting. We propose a novel semi-supervised and nonlinear approach for MTL using vector fields. A vector field is a smooth mapping from the manifold to the tangent spaces which can be viewed as a directional derivative of functions on the manifold. We argue that vector fields provide a natural way to exploit the geometric structure of data as well as the shared differential structure of tasks, both of which are crucial for semi-supervised multi-task learning. In this paper, we develop multi-task vector field learning (MTVFL) which learns the predictor functions and the vector fields simultaneously. MTVFL has the following key properties. (1) The vector fields MTVFL learns are close to the gradient fields of the predictor functions. (2) Within each task, the vector field is required to be as parallel as possible which is expected to span a low dimensional subspace. (3) The vector fields from all tasks share a low dimensional subspace. We formalize our idea in a regularization framework and also provide a convex relaxation method to solve the original non-convex problem. The experimental results on synthetic and real data demonstrate the effectiveness of our proposed approach.

  15. Fractional Killing-Yano Tensors and Killing Vectors Using the Caputo Derivative in Some One- and Two-Dimensional Curved Space

    Directory of Open Access Journals (Sweden)

    Ehab Malkawi

    2014-01-01

    The classical free Lagrangian admitting a constant of motion, in one- and two-dimensional space, is generalized using the Caputo derivative of fractional calculus. The corresponding metric is obtained and the fractional Christoffel symbols, Killing vectors, and Killing-Yano tensors are derived. Some exact solutions of these quantities are reported.

  16. VEST: Abstract vector calculus simplification in Mathematica

    Science.gov (United States)

    Squire, J.; Burby, J.; Qin, H.

    2014-01-01

    We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper Burby et al. (2013) [12], we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.

  17. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    Science.gov (United States)

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM) which derives from projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy for use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  18. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM) which derives from projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy for use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  19. Towards a physics on fractals: Differential vector calculus in three-dimensional continuum with fractal metric

    Science.gov (United States)

    Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael

    2016-02-01

    One way to deal with physical problems on nowhere differentiable fractals is the mapping of these problems into the corresponding problems for continuum with a proper fractal metric. On this way different definitions of the fractal metric were suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for the continuum with any suitable definition of fractal metric. This offers a novel tool to study physics on fractals.

  20. Vectorization and improvement of nuclear codes. 3. DGR, STREAM V3.1, Cella, GGR

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Eguchi, Norikuni; Watanabe, Hideo; Machida, Masahiko; Yokokawa, Mitsuo; Fujii, Minoru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-01-01

    Four nuclear codes were vectorized and improved in order to achieve high-speed performance on the VP2600 supercomputer at the Computing and Information Systems Center of JAERI in fiscal year 1993: the molecular dynamics simulation code DGR, which simulates irradiation damage in diamond crystals; the three-dimensional non-steady compressible fluid dynamics code STREAM V3.1; the two-dimensional fluid simulation code Cella, based on a cellular automaton model; and the molecular dynamics code GGR, which simulates irradiation damage in black carbon crystals. Speed-up ratios of vector to scalar mode on the VP2600 are 2.8, 6.8-14.8, 15-16, and 1.23 for DGR, STREAM V3.1, Cella, and GGR, respectively. In this report, we present the vectorization techniques, the effects of vectorization, evaluations of the numerical results, and the techniques used for the improvements. (author).

  1. Noise-induced drift in two-dimensional anisotropic systems

    Science.gov (United States)

    Farago, Oded

    2017-10-01

    We study the isothermal Brownian dynamics of a particle in a system with spatially varying diffusivity. Due to the heterogeneity of the system, the particle's mean displacement does not vanish even if it does not experience any physical force. This phenomenon has been termed "noise-induced drift," and has been extensively studied for one-dimensional systems. Here, we examine the noise-induced drift in a two-dimensional anisotropic system, characterized by a symmetric diffusion tensor with unequal diagonal elements. A general expression for the mean displacement vector is derived and presented as a sum of two vectors, depicting two distinct drifting effects. The first vector describes the tendency of the particle to drift toward the high diffusivity side in each orthogonal principal diffusion direction. This is a generalization of the well-known expression for the noise-induced drift in one-dimensional systems. The second vector represents a novel drifting effect, not found in one-dimensional systems, originating from the spatial rotation in the directions of the principal axes. The validity of the derived expressions is verified by using Langevin dynamics simulations. As a specific example, we consider the relative diffusion of two transmembrane proteins, and demonstrate that the average distance between them increases at a surprisingly fast rate of several tens of micrometers per second.

  2. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is technically challenging, and the task becomes more difficult when the data are also high-dimensional. Skewed data are common in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of accuracy, F-measure, G-mean, and AUC, and can thus be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
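
    As a rough illustration of the kind of ensemble described above, the sketch below combines per-bag undersampling of the majority class with random feature subspaces and SVM base classifiers. It is a hedged stand-in: plain random subspaces replace the paper's FSS strategy, labels are assumed to be 0 (majority) and 1 (minority), and all parameter values are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        def asbagging_fss_fit(X, y, n_estimators=20, n_features=50, rng=None):
            """Balanced bagging with random feature subspaces and SVM base classifiers (sketch)."""
            rng = rng or np.random.default_rng(0)
            minority = np.where(y == 1)[0]
            majority = np.where(y == 0)[0]
            models = []
            for _ in range(n_estimators):
                boot_maj = rng.choice(majority, size=len(minority), replace=True)   # undersample
                idx = np.concatenate([minority, boot_maj])
                feats = rng.choice(X.shape[1], size=min(n_features, X.shape[1]), replace=False)
                clf = SVC(kernel="rbf", gamma="scale").fit(X[np.ix_(idx, feats)], y[idx])
                models.append((feats, clf))
            return models

        def asbagging_predict(models, X):
            """Majority vote over the base classifiers."""
            votes = np.mean([clf.predict(X[:, feats]) for feats, clf in models], axis=0)
            return (votes >= 0.5).astype(int)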

  3. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  4. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
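
    FLANN is exposed through OpenCV's Python bindings, and a common way to use the randomized k-d forest index described above looks like the following sketch; the descriptor data are random placeholders, and the tree, check, and ratio-test values are illustrative.

        import numpy as np
        import cv2

        # Build a randomized k-d forest index over float32 descriptors and run k-NN search.
        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=8)
        search_params = dict(checks=64)                  # more checks -> higher precision, slower

        flann = cv2.FlannBasedMatcher(index_params, search_params)
        query = np.random.rand(1000, 128).astype(np.float32)   # e.g., SIFT-like descriptors
        train = np.random.rand(5000, 128).astype(np.float32)
        matches = flann.knnMatch(query, train, k=2)      # two nearest neighbours per query vector
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test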

  5. Symmetric vectors and algebraic classification

    International Nuclear Information System (INIS)

    Leibowitz, E.

    1980-01-01

    The concept of a symmetric vector field in Riemannian manifolds, which arises in the study of relativistic cosmological models, is analyzed. Symmetric vectors are tied up with the algebraic properties of the manifold curvature. A procedure for generating a congruence of symmetric fields out of a given pair is outlined. The case of a three-dimensional manifold of constant curvature ("isotropic universe") is studied in detail, with all its symmetric vector fields being explicitly constructed.

  6. A new test for the mean vector in high-dimensional data

    Directory of Open Access Journals (Sweden)

    Knavoot Jiamwattanapong

    2015-08-01

    Full Text Available For testing the mean vector when the data are drawn from a multivariate normal population, the renowned Hotelling's T² test is no longer valid when the dimension of the data equals or exceeds the sample size. In this study, we consider the problem of testing the hypothesis H₀: μ = 0 and propose a new test based on the idea of keeping more information from the sample covariance matrix. The development of the statistic is based on the Hotelling's T² distribution, and the new test is invariant under a group of scalar transformations. The asymptotic distribution is derived under the null hypothesis. Simulation results show that the proposed test performs well and is more powerful as the data dimension increases for a given sample size. An analysis of DNA microarray data with the new test is demonstrated.
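
    The motivation for such tests can be checked directly: when the dimension p is at least the sample size n, the sample covariance matrix is singular and the classical T² statistic cannot even be formed. A minimal NumPy illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        n, p = 20, 40                           # dimension exceeds sample size
        X = rng.normal(size=(n, p))
        xbar = X.mean(axis=0)
        S = np.cov(X, rowvar=False)             # p x p sample covariance, rank at most n-1 < p
        print(np.linalg.matrix_rank(S))         # prints a value below p, so S is singular
        # T2 = n * xbar @ inv(S) @ xbar would fail here: S cannot be inverted when p >= n,
        # which is what motivates tests that avoid inverting the full sample covariance.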

  7. Complex vector triads in spinor theory in Minkowski space

    International Nuclear Information System (INIS)

    Zhelnorovich, V.A.

    1990-01-01

    It is shown that the tensor equations corresponding to the spinor Dirac equations represent a three-dimensional part of four-dimensional vector equations. The equations are formulated in a manifestly invariant form in terms of antisymmetric tensor components and the corresponding components of a complex vector triad. A complete system of relativistically invariant tensor equations is established.

  8. Using a Feature Subset Selection method and Support Vector Machine to address curse of dimensionality and redundancy in Hyperion hyperspectral data classification

    Directory of Open Access Journals (Sweden)

    Amir Salimi

    2018-04-01

    Full Text Available The curse of dimensionality, resulting from insufficient training samples and redundancy, is an important problem in the supervised classification of hyperspectral data. This problem can be handled by Feature Subset Selection (FSS) methods and the Support Vector Machine (SVM). FSS methods manage redundancy by removing redundant spectral bands, while kernel-based methods, especially the SVM, are well suited to classifying limited-sample data sets. This paper assesses the capability of an FSS method and the SVM under curse-of-dimensionality conditions and compares the results with an Artificial Neural Network (ANN), when they are used to classify alteration zones in a Hyperion hyperspectral image acquired over the largest Iranian porphyry copper complex. The results show that when the number of training samples was decreased, the accuracy of the SVM dropped by only 1.8%, whereas the accuracy of the ANN fell sharply, by 14.01%. In addition, a hybrid FSS was applied to reduce the dimensionality of the Hyperion data; of the 165 usable spectral bands, only 18 were selected as the most important and informative. Although this dimensionality reduction did not substantially improve the performance of the SVM, the ANN showed a significant reduction in computational time and a slight improvement in average accuracy. Therefore, the SVM, being relatively insensitive to the size of the training set and the feature space, can be applied to classification problems affected by the curse of dimensionality, and FSS methods can improve the performance of non-kernel-based classifiers by eliminating redundant features. Keywords: Curse of dimensionality, Feature Subset Selection, Hydrothermal alteration, Hyperspectral, SVM

  9. Efficient Vector-Based Forwarding for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Xie

    2010-01-01

    Full Text Available Underwater Sensor Networks (UWSNs are significantly different from terrestrial sensor networks in the following aspects: low bandwidth, high latency, node mobility, high error probability, and 3-dimensional space. These new features bring many challenges to the network protocol design of UWSNs. In this paper, we tackle one fundamental problem in UWSNs: robust, scalable, and energy efficient routing. We propose vector-based forwarding (VBF, a geographic routing protocol. In VBF, the forwarding path is guided by a vector from the source to the target, no state information is required on the sensor nodes, and only a small fraction of the nodes is involved in routing. To improve the robustness, packets are forwarded in redundant and interleaved paths. Further, a localized and distributed self-adaptation algorithm allows the nodes to reduce energy consumption by discarding redundant packets. VBF performs well in dense networks. For sparse networks, we propose a hop-by-hop vector-based forwarding (HH-VBF protocol, which adapts the vector-based approach at every hop. We evaluate the performance of VBF and HH-VBF through extensive simulations. The simulation results show that VBF achieves high packet delivery ratio and energy efficiency in dense networks and HH-VBF has high packet delivery ratio even in sparse networks.

  10. A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    ZHAO Jiaojiao

    2015-05-01

    Full Text Available A fast and high-precision orientation algorithm for BeiDou is proposed by analyzing the constellation characteristics of BeiDou and the features of its GEO satellites. Taking advantage of the good east-west geometry of the GEO satellites, candidate baseline vectors are first solved from the GEO observations combined with dimensionality reduction theory. Then the ambiguity function is used to evaluate the candidates, yielding the optimal baseline vector and the wide-lane integer ambiguities. On this basis, the B1 ambiguities are resolved, and the high-precision orientation is finally estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm, but also greatly reduces the ambiguity search region, so the integer ambiguities can be calculated in a single epoch. The algorithm is simulated with actual BeiDou ephemeris data, and the results show that the method is efficient and fast for orientation. It achieves a very high single-epoch success rate (99.31%) and accurate attitude angles (standard deviations of pitch and heading of 0.07° and 0.13°, respectively) in a real-time, dynamic environment.

  11. The zero-dimensional O(N) vector model as a benchmark for perturbation theory, the large-N expansion and the functional renormalization group

    International Nuclear Information System (INIS)

    Keitel, Jan; Bartosch, Lorenz

    2012-01-01

    We consider the zero-dimensional O(N) vector model as a simple example to calculate n-point correlation functions using perturbation theory, the large-N expansion and the functional renormalization group (FRG). Comparing our findings with exact results, we show that perturbation theory breaks down for moderate interactions for all N, as one should expect. While the interaction-induced shift of the free energy and the self-energy are well described by the large-N expansion even for small N, this is not the case for higher order correlation functions. However, using the FRG in its one-particle irreducible formalism, we see that very few running couplings suffice to get accurate results for arbitrary N in the strong coupling regime, outperforming the large-N expansion for small N. We further remark on how the derivative expansion, a well-known approximation strategy for the FRG, reduces to an exact method for the zero-dimensional O(N) vector model. (paper)

  12. Algebra of Complex Vectors and Applications in Electromagnetic Theory and Quantum Mechanics

    Directory of Open Access Journals (Sweden)

    Kundeti Muralidhar

    2015-08-01

    Full Text Available A complex vector is the sum of a vector and a bivector and forms a natural extension of a vector. Complex vectors have certain special geometric properties and are treated as algebraic entities. They represent rotations along with a specified orientation and direction in space. It is shown that the association of a complex vector with its conjugate generates a complex vector space, and that the corresponding basis elements defined from the complex vector and its conjugate form a closed, complex four-dimensional linear space. The complexification process in complex vector space allows the generation of an n-dimensional geometric algebra from an (n-1)-dimensional algebra by identifying the unit pseudoscalar with the square root of minus one. The spacetime algebra can be generated from the geometric algebra by considering a vector equal to the square root of plus one. The applications of complex vector algebra are discussed mainly in electromagnetic theory and in the dynamics of an elementary particle with extended structure. The complex vector formalism simplifies the expressions and elucidates the geometrical understanding of the basic concepts. The analysis shows that the existence of spin transforms a classical oscillator into a quantum oscillator. In conclusion, classical mechanics combined with the zero-point field leads to quantum mechanics.

  13. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Application of Bred Vectors To Data Assimilation

    Science.gov (United States)

    Corazza, M.; Kalnay, E.; Patil, Dj

    We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al., 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 x k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i). We define the bred vector dimension as BVDIM = [Sum_i s(i)]^2 / Sum_i [s(i)]^2. For example, if 4 out of the 5 vectors lie along v(1), and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0
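
    The BV-dimension defined above is straightforward to compute from the singular values of the matrix of local bred vectors; the following sketch, with toy inputs, reproduces the two limiting cases of fully aligned and fully independent bred vectors.

        import numpy as np

        def bred_vector_dimension(M):
            """BV-dimension of the subspace spanned by the k local bred vectors.
            M is the 50 x k matrix whose columns are unit-norm local bred vectors."""
            s = np.linalg.svd(M, compute_uv=False)
            return s.sum() ** 2 / (s ** 2).sum()

        # toy check: k identical columns give dimension ~1, k orthonormal columns give ~k
        k = 5
        rng = np.random.default_rng(0)
        v = rng.normal(size=(50, 1)); v /= np.linalg.norm(v)
        print(bred_vector_dimension(np.repeat(v, k, axis=1)))    # ~1.0
        Q, _ = np.linalg.qr(rng.normal(size=(50, k)))
        print(bred_vector_dimension(Q))                          # ~5.0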

  15. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Kawasaki, Nobuo [and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These results are reported in three parts: the vectorization part, the parallelization part, and the porting part. In this report, we describe the vectorization. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR, and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. In the parallelization part, the parallelization of the 2-dimensional relativistic electromagnetic particle code EM2D, the cylindrical direct numerical simulation code CYLDNS, and the molecular dynamics code DGR for simulating radiation damage in diamond crystals is described. In the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY, and the 2-D multigroup discrete ordinates transport code TWOTRAN-II is described, along with a survey for porting the command-driven interactive data analysis plotting program IPLOT. (author)

  16. Toward lattice fractional vector calculus

    International Nuclear Information System (INIS)

    Tarasov, Vasily E

    2014-01-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity. (papers)

  17. Toward lattice fractional vector calculus

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.

  18. Clinical validation of coronal and sagittal spinal curve measurements based on three-dimensional vertebra vector parameters.

    Science.gov (United States)

    Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás

    2012-10-01

    For many decades, visualization and evaluation of three-dimensional (3D) spinal deformities have only been possible by two-dimensional (2D) radiodiagnostic methods, and as a result, characterization and classification were based on 2D terminologies. Recent developments in medical digital imaging and 3D visualization techniques including surface 3D reconstructions opened a chance for a long-sought change in this field. Supported by a 3D Terminology on Spinal Deformities of the Scoliosis Research Society, an approach for 3D measurements and a new 3D classification of scoliosis yielded several compelling concepts on 3D visualization and new proposals for 3D classification in recent years. More recently, a new proposal for visualization and complete 3D evaluation of the spine by 3D vertebra vectors has been introduced by our workgroup, a concept, based on EOS 2D/3D, a groundbreaking new ultralow radiation dose integrated orthopedic imaging device with sterEOS 3D spine reconstruction software. Comparison of accuracy, correlation of measurement values, intraobserver and interrater reliability of methods by conventional manual 2D and vertebra vector-based 3D measurements in a routine clinical setting. Retrospective, nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) including 10 healthy athletes with normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). Overall range of coronal curves was between 2.4 and 117.5°. Analysis of accuracy and reliability of measurements was carried out on a group of all patients and in subgroups based on coronal plane deviation: 0 to 10° (Group 1; n=36), 10 to 25° (Group 2; n=25), 25 to 50° (Group 3; n=69), 50 to 75

  19. Topological vector spaces and their applications

    CERN Document Server

    Bogachev, V I

    2017-01-01

    This book gives a compact exposition of the fundamentals of the theory of locally convex topological vector spaces. Furthermore it contains a survey of the most important results of a more subtle nature, which cannot be regarded as basic, but knowledge which is useful for understanding applications. Finally, the book explores some of such applications connected with differential calculus and measure theory in infinite-dimensional spaces. These applications are a central aspect of the book, which is why it is different from the wide range of existing texts on topological vector spaces. In addition, this book develops differential and integral calculus on infinite-dimensional locally convex spaces by using methods and techniques of the theory of locally convex spaces. The target readership includes mathematicians and physicists whose research is related to infinite-dimensional analysis.

  20. Calculus with vectors

    CERN Document Server

    Treiman, Jay S

    2014-01-01

    Calculus with Vectors grew out of a strong need for a beginning calculus textbook for undergraduates who intend to pursue careers in STEM fields. The approach introduces vector-valued functions from the start, emphasizing the connections between one-variable and multi-variable calculus. The text includes early vectors and early transcendentals and takes a rigorous but informal approach to vectors. Examples and focused applications are well presented along with an abundance of motivating exercises. All three-dimensional graphs have rotatable versions included as extra source materials and may be freely downloaded and manipulated with Maple Player; a free Maple Player App is available for the iPad on iTunes. The approaches taken to topics such as the derivation of the derivatives of sine and cosine, the approach to limits, and the use of "tables" of integration have been modified from the standards seen in other textbooks in order to maximize the ease with which students may comprehend the material. Additio...

  1. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
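
    A minimal sketch of the LASSO-then-least-squares idea for a VAR(p) model is given below, assuming one penalized regression per channel followed by an ordinary least-squares refit on the selected lags; the lag order, penalty, and use of scikit-learn are illustrative choices, not the authors' exact LASSLE implementation.

        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        def fit_var_lassle(X, p=5, alpha=0.01):
            """LASSO selection followed by least-squares refit for a VAR(p) model.
            X has shape (T, n_channels); returns coefficients of shape (n_channels, n_channels * p)."""
            T, n = X.shape
            # design matrix of lagged values: row t holds [X[t-1], ..., X[t-p]] flattened
            Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])
            Y = X[p:]
            B = np.zeros((n, n * p))
            for j in range(n):
                support = np.flatnonzero(Lasso(alpha=alpha, max_iter=10000).fit(Z, Y[:, j]).coef_)
                if support.size:                             # refit OLS on the selected lags only
                    B[j, support] = LinearRegression().fit(Z[:, support], Y[:, j]).coef_
            return B

        # toy usage on white-noise data (coefficients should be near zero)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))
        print(fit_var_lassle(X, p=3, alpha=0.05).shape)      # (6, 18)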

  2. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high dimensional data cube into low multi-dimensional hierarchical cube. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  3. Application of support vector machine to three-dimensional shape-based virtual screening using comprehensive three-dimensional molecular shape overlay with known inhibitors.

    Science.gov (United States)

    Sato, Tomohiro; Yuki, Hitomi; Takaya, Daisuke; Sasaki, Shunta; Tanaka, Akiko; Honma, Teruki

    2012-04-23

    In this study, machine learning using support vector machine was combined with three-dimensional (3D) molecular shape overlay, to improve the screening efficiency. Since the 3D molecular shape overlay does not use fingerprints or descriptors to compare two compounds, unlike 2D similarity methods, the application of machine learning to a 3D shape-based method has not been extensively investigated. The 3D similarity profile of a compound is defined as the array of 3D shape similarities with multiple known active compounds of the target protein and is used as the explanatory variable of support vector machine. As the measures of 3D shape similarity for our new prediction models, the prediction performances of the 3D shape similarity metrics implemented in ROCS, such as ShapeTanimoto and ScaledColor, were validated, using the known inhibitors of 15 target proteins derived from the ChEMBL database. The learning models based on the 3D similarity profiles stably outperformed the original ROCS when more than 10 known inhibitors were available as the queries. The results demonstrated the advantages of combining machine learning with the 3D similarity profile to process the 3D shape information of plural active compounds.
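
    In outline, each compound's feature vector is its 3D similarity profile against the known actives, and a standard SVM is trained on these profiles. The sketch below illustrates the pipeline with random placeholder similarity scores standing in for the ROCS ShapeTanimoto/ScaledColor values; all parameters are illustrative.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_compounds, q = 500, 15                 # q known actives define the similarity profile
        profiles = rng.uniform(0.0, 1.0, size=(n_compounds, q))   # placeholder similarity scores
        labels = (profiles.mean(axis=1) + 0.1 * rng.normal(size=n_compounds) > 0.55).astype(int)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print(cross_val_score(clf, profiles, labels, cv=5, scoring="roc_auc").mean())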

  4. Stationary closed strings in five-dimensional flat spacetime

    Science.gov (United States)

    Igata, Takahisa; Ishihara, Hideki; Nishiwaki, Keisuke

    2012-11-01

    We investigate stationary rotating closed Nambu-Goto strings in five-dimensional flat spacetime. The stationary string is defined as a world sheet that is tangent to a timelike Killing vector. The Nambu-Goto equation of motion for the stationary string is reduced to the geodesic equation on the orbit space of the isometry group action generated by the Killing vector. We take a linear combination of a time-translation vector and space-rotation vectors as the Killing vector, and explicitly construct general solutions of stationary rotating closed strings in five-dimensional flat spacetime. We show a variety of their configurations and properties.

  5. High energy beta rays and vectors of Bilharzia and Fasciola

    International Nuclear Information System (INIS)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnea natalensis, the snail vector of Schistosoma haematobium, have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation within 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas. (author)

  6. High energy beta rays and vectors of Bilharzia and Fasciola

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnea natalensis, the snail vector of Schistosoma haematobium, have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation within 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas.

  7. General projective relativity and the vector-tensor gravitational field

    International Nuclear Information System (INIS)

    Arcidiacono, G.

    1986-01-01

    In general projective relativity, the induced 4-dimensional metric is symmetric in three cases, yielding the vector-tensor, the scalar-tensor, and the scalar-vector-tensor theories of gravitation. In this work we examine the vector-tensor theory, which is similar to Veblen's theory but has a different physical interpretation.

  8. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (vectorization). Progress report fiscal 1997

    International Nuclear Information System (INIS)

    Kawasaki, Nobuo; Ogasawara, Shinobu; Adachi, Masaaki; Kume, Etsuo; Ishizuki, Shigeru; Tanabe, Hidenobu; Nemoto, Toshiyuki; Kawai, Wataru; Watanabe, Hideo

    1999-05-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system and/or the AP3000 system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 14 codes in fiscal 1997. These results are reported in three parts: the vectorization part, the parallelization part, and the porting part. In this report, we describe the vectorization. In the vectorization part, the vectorization of the multidimensional two-fluid model code ACE-3D for evaluation of constitutive equations, the statistical decay code SD, and the three-dimensional thermal analysis code SSPHEAT for the in-core test section (T2) of HENDEL is described. In the parallelization part, the parallelization of the cylindrical direct numerical simulation code CYLDNS44N, the worldwide version of the system for prediction of environmental emergency dose information code WSPEEDI, the extension of the quantum molecular dynamics code EQMD, and the three-dimensional non-steady compressible fluid dynamics code STREAM is described. In the porting part, the porting of the transient reactor analysis code TRAC-BF1 and the Monte Carlo radiation transport code MCNP4A to the AP3000 is described. In addition, a modification of the program libraries for the command-driven interactive data analysis plotting program IPLOT is described. (author)

  9. Efficient modeling of vector hysteresis using fuzzy inference systems

    International Nuclear Information System (INIS)

    Adly, A.A.; Abd-El-Hafiz, S.K.

    2008-01-01

    Vector hysteresis models have always been regarded as important tools by which multi-dimensional magnetic field-media interactions may be predicted. In the past, considerable effort has been focused on mathematical modeling methodologies for vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. The computational efficiency of the proposed approach stems from the fact that the basic non-local-memory Preisach-type hysteresis model is approximated by a local-memory model. The proposed low-cost computational methodology can easily be integrated into field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented

  10. Vorticity vector-potential method based on time-dependent curvilinear coordinates for two-dimensional rotating flows in closed configurations

    Science.gov (United States)

    Fu, Yuan; Zhang, Da-peng; Xie, Xi-lin

    2018-04-01

    In this study, a vorticity vector-potential method for two-dimensional viscous incompressible rotating driven flows is developed in the time-dependent curvilinear coordinates. The method is applicable in both inertial and non-inertial frames of reference with the advantage of a fixed and regular calculation domain. The numerical method is applied to triangle and curved triangle configurations in constant and varying rotational angular velocity cases respectively. The evolutions of flow field are studied. The geostrophic effect, unsteady effect and curvature effect on the evolutions are discussed.

  11. High-quality and interactive animations of 3D time-varying vector fields.

    Science.gov (United States)

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.

  12. On the uncertainty relations for vector-valued operators

    International Nuclear Information System (INIS)

    Chistyakov, A.L.

    1976-01-01

    In analogy with the expression of the Heisenberg uncertainty principle in terms of dispersions by means of the Weyl inequality, in the case of one-dimensional quantum mechanical quantities, the principle for many-dimensional quantities can be expressed in terms of generalized dispersions and covariance matrices by means of inequalities similar to the Weyl inequality. The proofs of these inequalities are given in an abstract form, not only for physical vector quantities, but also for arbitrary vector-valued operators with commuting self-adjoint components

  13. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  14. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. A pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
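
    The comparison protocol described above can be reproduced in outline with scikit-learn; the sketch below runs 10-fold cross-validation for the four method families on synthetic high-dimensional data (the study's survey data are not reproduced here), with all hyperparameters being illustrative.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        # synthetic high-dimensional classification data standing in for the survey dataset
        X, y = make_classification(n_samples=300, n_features=400, n_informative=20, random_state=0)

        models = {
            "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
            "CART": DecisionTreeClassifier(random_state=0),
            "SVM": SVC(kernel="rbf", gamma="scale"),
            "k-NN": KNeighborsClassifier(n_neighbors=5),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=10)     # 10-fold CV accuracy
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")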

  15. Three-dimensional tumor spheroids for in vitro analysis of bacteria as gene delivery vectors in tumor therapy.

    Science.gov (United States)

    Osswald, Annika; Sun, Zhongke; Grimm, Verena; Ampem, Grace; Riegel, Karin; Westendorf, Astrid M; Sommergruber, Wolfgang; Otte, Kerstin; Dürre, Peter; Riedel, Christian U

    2015-12-12

    Several studies in animal models demonstrated that obligate and facultative anaerobic bacteria of the genera Bifidobacterium, Salmonella, or Clostridium specifically colonize solid tumors. Consequently, these and other bacteria are discussed as live vectors to deliver therapeutic genes to inhibit tumor growth. Therapeutic approaches for cancer treatment using anaerobic bacteria have been investigated in different mouse models. In the present study, solid three-dimensional (3D) multicellular tumor spheroids (MCTS) of the colorectal adenocarcinoma cell line HT-29 were generated and tested for their potential to study prodrug-converting enzyme therapies using bacterial vectors in vitro. HT-29 MCTS resembled solid tumors displaying all relevant features with an outer zone of proliferating cells and hypoxic and apoptotic regions in the core. Upon incubation with HT-29 MCTS, Bifidobacterium bifidum S17 and Salmonella typhimurium YB1 selectively localized, survived and replicated in hypoxic areas inside MCTS. Furthermore, spores of the obligate anaerobe Clostridium sporogenes germinated in these hypoxic areas. To further evaluate the potential of MCTS to investigate therapeutic approaches using bacteria as gene delivery vectors, recombinant bifidobacteria expressing prodrug-converting enzymes were used. Expression of a secreted cytosine deaminase in combination with 5-fluorocytosine had no effect on growth of MCTS due to an intrinsic resistance of HT-29 cells to 5-fluorouracil, i.e. the converted drug. However, a combination of the prodrug CB1954 and a strain expressing a secreted chromate reductase effectively inhibited MCTS growth. Collectively, the presented results indicate that MCTS are a suitable and reliable model to investigate live bacteria as gene delivery vectors for cancer therapy in vitro.

  16. Page segmentation using script identification vectors: A first look

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Cannon, M.; Kelly, P.; White, J.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic will require further investigation.
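
    The visualization step described above reduces each 13-dimensional script identification vector to three principal components and maps them to RGB; a minimal NumPy sketch of that mapping follows (the PCA-to-colour scaling is an illustrative choice).

        import numpy as np

        def script_vectors_to_rgb(V):
            """Map 13-dimensional script identification vectors (one per connected component)
            to RGB colours via their first three principal components. V has shape (n, 13)."""
            Vc = V - V.mean(axis=0)
            _, _, Wt = np.linalg.svd(Vc, full_matrices=False)
            pcs = Vc @ Wt[:3].T                        # first three principal components
            lo, hi = pcs.min(axis=0), pcs.max(axis=0)
            return (pcs - lo) / (hi - lo + 1e-12)      # scaled to [0, 1] and read as (R, G, B)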

  17. Intraoperative Vector Flow Imaging of the Heart

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Lindskov; Møller-Sørensen, Hasse; Pedersen, Mads Møller

    2013-01-01

    The cardiac flow is complex and multidirectional, and difficult to measure with conventional Doppler ultrasound (US) methods due to the one-dimensional and angle-dependent velocity estimation. The vector velocity method Transverse Oscillation (TO) has been proposed as a solution to this. TO is implemented on a conventional US scanner (Pro Focus 2202 UltraView, BK Medical) using a linear transducer (8670, BK Medical) and can provide real-time, angle-independent vector velocity estimates of the cardiac blood flow. During cardiac surgery, epicardiac US examinations using TO were performed on three

  18. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  19. Geminivirus vectors for high-level expression of foreign proteins in plant cells.

    Science.gov (United States)

    Mor, Tsafrir S; Moon, Yong-Sun; Palmer, Kenneth E; Mason, Hugh S

    2003-02-20

    Bean yellow dwarf virus (BeYDV) is a monopartite geminivirus that can infect dicotyledonous plants. We have developed a high-level expression system that utilizes elements of the replication machinery of this single-stranded DNA virus. The replication initiator protein (Rep) mediates release and replication of a replicon from a DNA construct ("LSL vector") that contains an expression cassette for a gene of interest flanked by cis-acting elements of the virus. We used tobacco NT1 cells and biolistic delivery of plasmid DNA for evaluation of replication and expression of reporter genes contained within an LSL vector. By codelivery of a GUS reporter-LSL vector and a Rep-supplying vector, we obtained up to 40-fold increase in expression levels compared to delivery of the reporter-LSL vectors alone. High-copy replication of the LSL vector was correlated with enhanced expression of GUS. Rep expression using a whole BeYDV clone, a cauliflower mosaic virus 35S promoter driving either genomic rep or an intron-deleted rep gene, or 35S-rep contained in the LSL vector all achieved efficient replication and enhancement of GUS expression. We anticipate that this system can be adapted for use in transgenic plants or plant cell cultures with appropriately regulated expression of Rep, with the potential to greatly increase yield of recombinant proteins. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 81: 430-437, 2003.

  20. A lower dimensional feature vector for identification of partial discharges of different origin using time measurements

    International Nuclear Information System (INIS)

    Evagorou, Demetres; Kyprianou, Andreas; Georghiou, George E; Lewin, Paul L; Stavrou, Andreas

    2012-01-01

    Partial discharge (PD) classification into sources of different origin is essential in evaluating the severity of the damage caused by PD activity on the insulation of power cables and their accessories. More specifically, some types of PD can be classified as having a detrimental effect on the integrity of the insulation while others can be deemed relatively harmless, rendering the correct classification of different PD types of vital importance to electrical utilities. In this work, a feature vector is proposed based on higher-order statistics of the wavelet packet transform (WPT) coefficients at selected nodes of time-domain measurements, which can compactly represent the characteristics of different PD sources. To assess its performance, experimental data acquired under laboratory conditions for four different PD sources encountered in power systems were used. The two learning machine methods employed as classification algorithms, namely the support vector machine and the probabilistic neural network, achieved overall classification rates of around 98%. In comparison, using the scaled, raw WPT coefficients as the feature vector resulted in a classification accuracy of around 99%, but with a significantly higher number of dimensions (1304 versus 16), validating the PD identification ability of the proposed feature. Dimensionality reduction becomes a key factor in online, real-time data collection and processing of PD measurements, reducing the classification effort and the data-storage requirements. Therefore, the proposed method can constitute a potential tool for such online measurements, after addressing issues related to on-site measurements such as the rejection of interference. (paper)
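
    A hedged sketch of the feature extraction described above: higher-order statistics (skewness and kurtosis) of wavelet packet coefficients at a few selected nodes, using PyWavelets. The wavelet, decomposition level, and node paths are illustrative choices, not those of the paper, and the test signal is a synthetic stand-in for a PD pulse record.

        import numpy as np
        import pywt
        from scipy.stats import skew, kurtosis

        def pd_feature_vector(signal, wavelet="db4", level=3, nodes=("aaa", "aad", "add", "ddd")):
            """Higher-order statistics of selected wavelet packet nodes as a compact feature vector."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            feats = []
            for name in nodes:
                c = np.asarray(wp[name].data)
                feats += [skew(c), kurtosis(c)]
            return np.array(feats)

        # example: a noisy damped oscillation standing in for a PD pulse record
        t = np.linspace(0, 1, 4096)
        sig = np.exp(-40 * t) * np.sin(2 * np.pi * 250 * t) + 0.05 * np.random.randn(t.size)
        print(pd_feature_vector(sig))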

  1. Graphene materials as 2D non-viral gene transfer vector platforms.

    Science.gov (United States)

    Vincent, M; de Lázaro, I; Kostarelos, K

    2017-03-01

    Advances in genomics and gene therapy could offer solutions to many diseases that remain incurable today, however, one of the critical reasons halting clinical progress is due to the difficulty in designing efficient and safe delivery vectors for the appropriate genetic cargo. Safety and large-scale production concerns counter-balance the high gene transfer efficiency achieved with viral vectors, while non-viral strategies have yet to become sufficiently efficient. The extraordinary physicochemical, optical and photothermal properties of graphene-based materials (GBMs) could offer two-dimensional components for the design of nucleic acid carrier systems. We discuss here such properties and their implications for the optimization of gene delivery. While the design of such vectors is still in its infancy, we provide here an exhaustive and up-to-date analysis of the studies that have explored GBMs as gene transfer vectors, focusing on the functionalization strategies followed to improve vector performance and on the biological effects attained.

  2. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  3. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  4. Applications of the Local Algebras of Vector Fields to the Modelling of Physical Phenomena

    OpenAIRE

    Bayak, Igor V.

    2015-01-01

    In this paper we discuss the local algebras of linear vector fields that can be used in the mathematical modelling of physical space by building the dynamical flows of vector fields on eight-dimensional cylindrical or toroidal manifolds. It is shown that the topological features of the vector fields obey the Dirac equation when moving freely within the surface of a pseudo-sphere in the eight-dimensional pseudo-Euclidean space.

  5. Vectorized and multitasked solution of the few-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-01-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipelines, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference approximation and the outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method, which allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise-detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback, the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of about 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  6. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    Science.gov (United States)

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP) and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
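
    The memory figures quoted above follow from simple counting, as the back-of-envelope sketch below illustrates. The primitive basis size (10 functions per coordinate, which reproduces the quoted 10^20-component figure) and the SOP rank are illustrative assumptions, not values taken from the paper.

```python
# Storage of a full direct-product coefficient vector versus a
# sum-of-products (SOP) representation for a D-coordinate problem.
D = 20        # number of coordinates (the 20-D model problem)
n = 10        # primitive basis functions per coordinate (assumed)
rank = 50     # SOP terms retained after rank reduction (assumed)

full_components = n ** D                 # components of a direct-product vector
full_gb = full_components * 8 / 1e9      # double precision, in GB

sop_components = rank * D * n            # one length-n factor per coordinate per term
sop_kb = sop_components * 8 / 1e3

print(f"direct product: {full_components:.3e} components, {full_gb:.3e} GB")
print(f"SOP (rank {rank}): {sop_components} components, {sop_kb:.1f} kB")
```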

  7. Vector boson excitations near deconfined quantum critical points.

    Science.gov (United States)

    Huh, Yejin; Strack, Philipp; Sachdev, Subir

    2013-10-18

    We show that the Néel states of two-dimensional antiferromagnets have low energy vector boson excitations in the vicinity of deconfined quantum critical points. We compute the universal damping of these excitations arising from spin-wave emission. Detection of such a vector boson will demonstrate the existence of emergent topological gauge excitations in a quantum spin system.

  8. An Underwater Acoustic Vector Sensor with High Sensitivity and Broad Band

    Directory of Open Access Journals (Sweden)

    Hu Zhang

    2014-05-01

    Full Text Available Recently, acoustic vector sensors that use accelerometers as sensing elements have been widely used in underwater acoustic engineering, but their sensitivity in the low frequency band is usually lower than -220 dB. In this paper, using an optimized piezoelectric trilaminar low-frequency sensing element, we designed a high-sensitivity, internally placed ICP piezoelectric accelerometer as the sensing element. Through structure optimization, we made a high-sensitivity, broadband, small-scale vector sensor. The working band is 10-2000 Hz, the sound pressure sensitivity is -185 dB (at 100 Hz), the outer diameter is 42 mm, and the length is 80 mm.

  9. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  10. Manipulation of dielectric Rayleigh particles using highly focused elliptically polarized vector fields.

    Science.gov (United States)

    Gu, Bing; Xu, Danfeng; Rui, Guanghao; Lian, Meng; Cui, Yiping; Zhan, Qiwen

    2015-09-20

    Generation of vectorial optical fields with arbitrary polarization distribution is of great interest in areas where exotic optical fields are desired. In this work, we experimentally demonstrate the versatile generation of linearly polarized vector fields, elliptically polarized vector fields, and circularly polarized vortex beams through introducing attenuators in a common-path interferometer. By means of Richards-Wolf vectorial diffraction method, the characteristics of the highly focused elliptically polarized vector fields are studied. The optical force and torque on a dielectric Rayleigh particle produced by these tightly focused vector fields are calculated and exploited for the stable trapping of dielectric Rayleigh particles. It is shown that the additional degree of freedom provided by the elliptically polarized vector field allows one to control the spatial structure of polarization, to engineer the focusing field, and to tailor the optical force and torque on a dielectric Rayleigh particle.

  11. A dynamic counterpart of Lamb vector in viscous compressible aerodynamics

    International Nuclear Information System (INIS)

    Liu, L Q; Wu, J Z; Shi, Y P; Zhu, J Y

    2014-01-01

    The Lamb vector is known to play a key role in incompressible fluid dynamics and vortex dynamics. In particular, in low-speed steady aerodynamics it is solely responsible for the total force acting on a moving body, known as the vortex force, with the classic two-dimensional (exact) Kutta–Joukowski theorem and three-dimensional (linearized) lifting-line theory as the most famous special applications. In this paper we identify an innovative dynamic counterpart of the Lamb vector in viscous compressible aerodynamics, which we call the compressible Lamb vector. Mathematically, we present a theorem on the dynamic far-field decay law of the vorticity and dilatation fields, and thereby prove that the generalized Lamb vector enjoys exactly the same integral properties as the Lamb vector does in incompressible flow, and hence the vortex-force theory can be generalized to compressible flow with exactly the same general formulation. Moreover, for steady flow of polytropic gas, we show that physically the force exerted on a moving body by the gas consists of a transverse force produced by the original Lamb vector and a new longitudinal force that reflects the effects of compression and irreversible thermodynamics. (paper)

  12. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    Full Text Available In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
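
    A minimal sketch of the comparison protocol described above - the same bootstrap cross-validation splits and the same strictly proper loss for every modelling strategy - is given below using scikit-learn. Synthetic data stand in for the Nugenob metabolomics data, and the particular models, loss and number of bootstrap rounds are assumptions for illustration, not the exact choices made by the three players.

```python
# A sketch of the "fair comparison" protocol: every strategy is scored on
# the same bootstrap cross-validation splits with the same strictly proper
# loss (squared error for a continuous response). Synthetic data stand in
# for the Nugenob metabolomics data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR
from sklearn.utils import resample

X, y = make_regression(n_samples=80, n_features=500, n_informative=10,
                       noise=5.0, random_state=0)     # few subjects, many predictors

models = {"lasso": LassoCV(cv=5),
          "svm": SVR(C=10.0),
          "random forest": RandomForestRegressor(n_estimators=200, random_state=0)}

scores = {name: [] for name in models}
for b in range(25):                                    # bootstrap cross-validation rounds
    idx = resample(np.arange(len(y)), random_state=b)  # in-bag indices (with replacement)
    oob = np.setdiff1d(np.arange(len(y)), idx)         # out-of-bag test set
    for name, model in models.items():
        model.fit(X[idx], y[idx])
        pred = model.predict(X[oob])
        scores[name].append(mean_squared_error(y[oob], pred))

for name, s in scores.items():
    print(f"{name:14s} mean out-of-bag squared error: {np.mean(s):.1f}")
```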

  13. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  14. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as a quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  15. High-energy manifestations of heavy quarks in axial-vector neutral currents

    International Nuclear Information System (INIS)

    Kizukuri, Y.; Ohba, I.; Okano, K.; Yamanaka, Y.

    1981-01-01

    A recent work by Collins, Wilczek, and Zee has attempted to manifest the incompleteness of the decoupling theorem in the axial-vector neutral currents at low energies. In the spirit of their work, we calculate corrections to the axial-vector neutral currents from virtual-heavy-quark exchange in high-energy e⁺e⁻ processes and estimate some observable quantities sensitive to virtual-heavy-quark masses which may be compared with experimental data at LEP energies.

  16. Embedding of attitude determination in n-dimensional spaces

    Science.gov (United States)

    Bar-Itzhack, Itzhack Y.; Markley, F. Landis

    1988-01-01

    The problem of attitude determination in n-dimensional spaces is addressed. The proper parameters are found, and it is shown that not all three-dimensional methods have useful extensions to higher dimensions. It is demonstrated that Rodriguez parameters are conveniently extendable to other dimensions. An algorithm for using these parameters in the general n-dimensional case is developed and tested with a four-dimensional example. The correct mathematical description of angular velocities is addressed, showing that angular velocity in n dimensions cannot be represented by a vector but rather by a tensor of the second rank. Only in three dimensions can the angular velocity be described by a vector.
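
    Two points of the abstract are easy to check numerically: in n dimensions the angular-velocity-like object is a skew-symmetric second-rank tensor with n(n-1)/2 independent entries rather than an n-vector, and Rodrigues-type parameters extend to higher dimensions via the Cayley transform R = (I - S)^{-1}(I + S), which maps any skew-symmetric S to a proper rotation. The 4-D sketch below is illustrative only and is not the algorithm developed in the paper.

```python
# Numerical check: a skew-symmetric S in 4-D has n(n-1)/2 = 6 parameters,
# and the Cayley transform R = (I - S)^{-1}(I + S) maps it to a proper
# orthogonal matrix.
import numpy as np

def cayley(S):
    """Map a skew-symmetric matrix S to an orthogonal matrix."""
    I = np.eye(S.shape[0])
    return np.linalg.solve(I - S, I + S)

n = 4
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))
S = 0.5 * (A - A.T)                       # skew-symmetric generalization of Rodrigues parameters

R = cayley(S)
print(np.allclose(R.T @ R, np.eye(n)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: a proper rotation
```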

  17. Generation of High-order Group-velocity-locked Vector Solitons

    OpenAIRE

    Jin, X. X.; Wu, Z. C.; Zhang, Q.; Li, L.; Tang, D. Y.; Shen, D. Y.; Fu, S. N.; Liu, D. M.; Zhao, L. M.

    2015-01-01

    We report numerical simulations on high-order group-velocity-locked vector soliton (GVLVS) generation based on the fundamental GVLVS. The high-order GVLVS generated is characterized by a two-humped pulse along one polarization and a single-humped pulse along the orthogonal polarization. The phase difference between the two humps could be 180 degrees. It is found that by appropriately setting the time separation between the two components of the fundamental GVLVS, the high-order GVLVS wit...

  18. Killing vector fields in three dimensions: a method to solve massive gravity field equations

    Energy Technology Data Exchange (ETDEWEB)

    Guerses, Metin, E-mail: gurses@fen.bilkent.edu.t [Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara (Turkey)

    2010-10-21

    Killing vector fields in three dimensions play an important role in the construction of the related spacetime geometry. In this work we show that when a three-dimensional geometry admits a Killing vector field then the Ricci tensor of the geometry is determined in terms of the Killing vector field and its scalars. In this way we can generate all products and covariant derivatives at any order of the Ricci tensor. Using this property we give ways to solve the field equations of topologically massive gravity (TMG) and new massive gravity (NMG) introduced recently. In particular when the scalars of the Killing vector field (timelike, spacelike and null cases) are constants then all three-dimensional symmetric tensors of the geometry, the Ricci and Einstein tensors, their covariant derivatives at all orders, and their products of all orders are completely determined by the Killing vector field and the metric. Hence, the corresponding three-dimensional metrics are strong candidates for solving all higher derivative gravitational field equations in three dimensions.

  19. A family of E. coli expression vectors for laboratory scale and high throughput soluble protein production

    Directory of Open Access Journals (Sweden)

    Bottomley Stephen P

    2006-03-01

    Full Text Available Abstract Background In the past few years, both automated and manual high-throughput protein expression and purification have become an accessible means to rapidly screen and produce soluble proteins for structural and functional studies. However, many of the commercial vectors encoding different solubility tags require different cloning and purification steps for each vector, considerably slowing down expression screening. We have developed a set of E. coli expression vectors with different solubility tags that allow for parallel cloning from a single PCR product and can be purified using the same protocol. Results The set of E. coli expression vectors encodes either a hexa-histidine tag or one of the three most commonly used solubility tags (GST, MBP, NusA), all with an N-terminal hexa-histidine sequence. The result is two-fold: the His-tag facilitates purification by immobilised metal affinity chromatography, whilst the fusion domains act primarily as solubility aids during expression, in addition to providing an optional purification step. We have also incorporated a TEV recognition sequence following the solubility tag domain, which allows for highly specific cleavage (using TEV protease) of the fusion protein to yield native protein. These vectors are also designed for ligation-independent cloning and they possess a high-level expressing T7 promoter, which is suitable for auto-induction. To validate our vector system, we have cloned four different genes and also one gene into all four vectors and used small-scale expression and purification techniques. We demonstrate that the vectors are capable of high levels of expression and that efficient screening of new proteins can be readily achieved at the laboratory level. Conclusion The result is a set of four rationally designed vectors, which can be used for streamlined cloning, expression and purification of target proteins in the laboratory and have the potential for being adaptable to a high

  20. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
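
    The sketch below illustrates the flavour of such a procedure for the one-sample case: a maximum-type statistic with a Gaussian-multiplier bootstrap approximation for its critical value. It is a hedged toy version, not the implementation in the HDtest package, and the dimensions, bootstrap size and level are arbitrary.

```python
# Maximum-type one-sample mean test with a Gaussian (multiplier) bootstrap
# approximation for the critical value; a toy sketch, not the HDtest code.
import numpy as np

def max_type_test(X, n_boot=2000, alpha=0.05, seed=0):
    """Test H0: E[X] = 0 for an (n x p) data matrix X, p >> n allowed."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    s = X.std(axis=0, ddof=1)
    T = np.max(np.abs(np.sqrt(n) * xbar / s))          # maximum-type statistic

    Xc = (X - xbar) / s                                 # standardized, centred data
    rng = np.random.default_rng(seed)
    T_boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.standard_normal(n)                      # Gaussian multipliers
        T_boot[b] = np.max(np.abs(Xc.T @ e)) / np.sqrt(n)
    crit = np.quantile(T_boot, 1 - alpha)
    return T, crit, T > crit

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 1000))                     # n = 40, p = 1000, H0 true
print(max_type_test(X))
```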

  1. Strategies to generate high-titer, high-potency recombinant AAV3 serotype vectors

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2016-01-01

    Full Text Available Although recombinant adeno-associated virus serotype 3 (AAV3) vectors were largely ignored previously, owing to their poor transduction efficiency in most cells and tissues examined, our initial observation of the selective tropism of AAV3 serotype vectors for human liver cancer cell lines and primary human hepatocytes has led to renewed interest in this serotype. AAV3 vectors and their variants have recently proven to be extremely efficient in targeting human and nonhuman primate hepatocytes in vitro as well as in vivo. In the present studies, we wished to evaluate the relative contributions of the cis-acting inverted terminal repeats (ITRs) from AAV3 (ITR3), as well as the trans-acting Rep proteins from AAV3 (Rep3), in AAV3 vector production and transduction. To this end, we utilized two helper plasmids: pAAVr2c3, which carries rep2 and cap3 genes, and pAAVr3c3, which carries rep3 and cap3 genes. The combined use of AAV3 ITRs, AAV3 Rep proteins, and AAV3 capsids led to the production of recombinant vectors, AAV3-Rep3/ITR3, with up to approximately two to fourfold higher titers than AAV3-Rep2/ITR2 vectors produced using AAV2 ITRs, AAV2 Rep proteins, and AAV3 capsids. We also observed that the transduction efficiency of Rep3/ITR3 AAV3 vectors was approximately fourfold higher than that of Rep2/ITR2 AAV3 vectors in human hepatocellular carcinoma cell lines in vitro. The transduction efficiency of Rep3/ITR3 vectors was increased by ∼10-fold, when AAV3 capsids containing mutations in two surface-exposed residues (serine 663 and threonine 492) were used to generate a S663V+T492V double-mutant AAV3 vector. The Rep3/ITR3 AAV3 vectors also transduced human liver tumors in vivo approximately twofold more efficiently than those generated with Rep2/ITR2. Our data suggest that the transduction efficiency of AAV3 vectors can be significantly improved both using homologous Rep proteins and ITRs as well as by capsid optimization. Thus, the combined use of

  2. Application of vector CSAMT for the imaging of an active fault; CSAMT ho ni yoru danso no imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, T; Fukuoka, K [Oyo Corp., Tokyo (Japan)

    1997-05-27

    With the objective of identifying the three-dimensional resistivity structure of the deep Mizunawa fault in Fukuoka Prefecture, a measurement was carried out using the CSAMT method. The measurement was conducted by arranging seven traverse lines, each line having observation points installed at intervals of about 500 m. Among the 68 observation points in total, 33 points performed the vector measurement, and the remaining points the scalar measurement. For observation points having performed the vector measurement, polarization ellipses of the electric field were depicted to discuss in which direction the current prevails. For the analyses, a one-dimensional analysis was performed by using an inversion with a smoothness constraint, and a two-dimensional analysis was conducted by using the finite element method based on the result of the former analysis. The vector measurement revealed that the structure in the vicinity of the fault is estimated to be complex, and the two-dimensional analysis showed that the Mizunawa fault is located on a relatively clear resistivity boundary. In addition, it was made clear that the high resistivity band may even be divided into two regions of about 200 ohm-m and about 1000 ohm-m. 2 refs., 7 figs.

  3. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    Science.gov (United States)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by using a combination of two-dimensional proximity tracking algorithm and the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the shapes identified into five vehicle categories: motorcycle, car, van, bus, and truck by using three-dimensional templates and an algorithm based on histogram of oriented gradients and the support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on GRAM-RTM dataset and a proper real video dataset which is made publicly available as part of this work.

  4. High Accuracy Vector Helium Magnetometer

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed HAVHM instrument is a laser-pumped helium magnetometer with both triaxial vector and omnidirectional scalar measurement capabilities in a single...

  5. Fetal muscle gene transfer is not enhanced by an RGD capsid modification to high-capacity adenoviral vectors.

    Science.gov (United States)

    Bilbao, R; Reay, D P; Hughes, T; Biermann, V; Volpers, C; Goldberg, L; Bergelson, J; Kochanek, S; Clemens, P R

    2003-10-01

    High levels of alpha(v) integrin expression by fetal muscle suggested that vector re-targeting to integrins could enhance adenoviral vector-mediated transduction, thereby increasing safety and efficacy of muscle gene transfer in utero. High-capacity adenoviral (HC-Ad) vectors modified by an Arg-Gly-Asp (RGD) peptide motif in the HI loop of the adenoviral fiber (RGD-HC-Ad) have demonstrated efficient gene transfer through binding to alpha(v) integrins. To test integrin targeting of HC-Ad vectors for fetal muscle gene transfer, we compared unmodified and RGD-modified HC-Ad vectors. In vivo, unmodified HC-Ad vector transduced fetal mouse muscle with four-fold higher efficiency compared to RGD-HC-Ad vector. Confirming that the difference was due to muscle cell autonomous factors and not mechanical barriers, transduction of primary myogenic cells isolated from murine fetal muscle in vitro demonstrated a three-fold better transduction by HC-Ad vector than by RGD-HC-Ad vector. We hypothesized that the high expression level of coxsackievirus and adenovirus receptor (CAR), demonstrated in fetal muscle cells both in vitro and in vivo, was the crucial variable influencing the relative transduction efficiencies of HC-Ad and RGD-HC-Ad vectors. To explore this further, we studied transduction by HC-Ad and RGD-HC-Ad vectors in paired cell lines that expressed alpha(v) integrins and differed only by the presence or absence of CAR expression. The results increase our understanding of factors that will be important for retargeting HC-Ad vectors to enhance gene transfer to fetal muscle.

  6. Vectoring of parallel synthetic jets: A parametric study

    Science.gov (United States)

    Berk, Tim; Gomit, Guillaume; Ganapathisubramani, Bharathram

    2016-11-01

    The vectoring of a pair of parallel synthetic jets can be described using five dimensionless parameters: the aspect ratio of the slots, the Strouhal number, the Reynolds number, the phase difference between the jets and the spacing between the slots. In the present study, the influence of the latter four on the vectoring behaviour of the jets is examined experimentally using particle image velocimetry. Time-averaged velocity maps are used to study the variations in vectoring behaviour for a parametric sweep of each of the four parameters independently. A topological map is constructed for the full four-dimensional parameter space. The vectoring behaviour is described both qualitatively and quantitatively. A vectoring mechanism is proposed, based on measured vortex positions. We acknowledge the financial support from the European Research Council (ERC Grant Agreement No. 277472).

  7. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  9. Generalized synthetic aperture radar automatic target recognition by convolutional neural network with joint use of two-dimensional principal component analysis and support vector machine

    Science.gov (United States)

    Zheng, Ce; Jiang, Xue; Liu, Xingzhao

    2017-10-01

    Convolutional neural network (CNN), as a vital part of the deep learning research field, has shown powerful potential for automatic target recognition (ATR) of synthetic aperture radar (SAR). However, the high complexity caused by the deep structure of CNN makes it difficult to generalize. An improved form of CNN with higher generalization capability and less probability of overfitting, which further improves the efficiency and robustness of the SAR ATR system, is proposed. The convolution layers of CNN are combined with a two-dimensional principal component analysis algorithm. Correspondingly, the kernel support vector machine is utilized as the classifier layer instead of the multilayer perceptron. The verification experiments are implemented using the moving and stationary target acquisition and recognition database, and the results validate the efficiency of the proposed method.
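
    As a rough illustration of the 2DPCA-plus-kernel-SVM stage described above, the sketch below applies classical 2DPCA directly to image chips and feeds the flattened projections to an RBF SVM. Random matrices stand in for SAR chips and the convolutional layers are omitted, so it only shows the feature/classifier combination, not the proposed network.

```python
# Sketch of a 2DPCA + kernel SVM stage: 2DPCA projects each image matrix
# onto the leading eigenvectors of the image scatter matrix, and an RBF SVM
# classifies the flattened projections. Random "chips" replace SAR data.
import numpy as np
from sklearn.svm import SVC

def two_dpca(images, n_components=8):
    """Classical 2DPCA: right-multiply each image by the top eigenvectors
    of the (width x width) image scatter matrix G."""
    centred = images - images.mean(axis=0)
    G = np.einsum('kij,kil->jl', centred, centred) / len(images)
    _, vecs = np.linalg.eigh(G)               # eigenvalues ascending
    W = vecs[:, ::-1][:, :n_components]       # keep the leading eigenvectors
    return np.stack([img @ W for img in images])

rng = np.random.default_rng(0)
n, h, w = 200, 32, 32
images = rng.normal(size=(n, h, w))
labels = rng.integers(0, 3, size=n)           # 3 dummy target classes

X = two_dpca(images).reshape(n, -1)           # flatten the (h x 8) projections
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, labels)
print(clf.score(X, labels))                   # training accuracy on dummy data
```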

  10. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  11. Two-Dimensional Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Bo Jia

    2015-01-01

    (BP networks. However, like many other methods, ELM was originally proposed to handle vector patterns, while non-vector patterns arising in real applications, such as image data, still need to be explored. We propose the two-dimensional extreme learning machine (2DELM), based on the very natural idea of dealing with matrix data directly. Unlike the original ELM, which handles vectors, 2DELM takes matrices as input features without vectorization. Empirical studies on several real image datasets show the efficiency and effectiveness of the algorithm.
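
    One common way to realize the matrix-input idea is to give each hidden node a pair of random left and right projection vectors, so the image is never flattened, and to solve for the output weights by least squares exactly as in a standard ELM. The sketch below follows that recipe on synthetic data; it is an assumption-laden illustration of the general 2DELM idea, not necessarily the exact formulation of the paper.

```python
# Matrix-input ELM sketch: hidden activation h_k = sigmoid(u_k' A v_k + b_k)
# with random u_k, v_k, and output weights solved in closed form.
import numpy as np

def train_2delm(images, targets, n_hidden=200, seed=0):
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    U = rng.normal(size=(n_hidden, h))          # left projection per hidden node
    V = rng.normal(size=(n_hidden, w))          # right projection per hidden node
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(np.einsum('kh,nhw,kw->nk', U, images, V) + b)))
    beta = np.linalg.pinv(H) @ targets          # least-squares output weights
    return U, V, b, beta

def predict_2delm(model, images):
    U, V, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(np.einsum('kh,nhw,kw->nk', U, images, V) + b)))
    return H @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16, 16))
Y = np.eye(4)[rng.integers(0, 4, size=300)]     # one-hot labels for 4 dummy classes
model = train_2delm(X, Y)
print((predict_2delm(model, X).argmax(1) == Y.argmax(1)).mean())
```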

  12. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples. Otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps, instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure that we get a truly random initial weight vector set, the map size is less of a concern, and errors tend to average out. In our experiments applied to microarray datasets, which are highly dense data composed of genetics-related information, the precision of Multi-SOM is 10.58% greater than that of SOM, and its recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.

  13. Three-dimensional tori and Arnold tongues

    Energy Technology Data Exchange (ETDEWEB)

    Sekikawa, Munehisa, E-mail: sekikawa@cc.utsunomiya-u.ac.jp [Department of Mechanical and Intelligent Engineering, Utsunomiya University, Utsunomiya-shi 321-8585 (Japan); Inaba, Naohiko [Organization for the Strategic Coordination of Research and Intellectual Property, Meiji University, Kawasaki-shi 214-8571 (Japan); Kamiyama, Kyohei [Department of Electronics and Bioinformatics, Meiji University, Kawasaki-shi 214-8571 (Japan); Aihara, Kazuyuki [Institute of Industrial Science, the University of Tokyo, Meguro-ku 153-8505 (Japan)

    2014-03-15

    This study analyzes an Arnold resonance web, which includes complicated quasi-periodic bifurcations, by conducting a Lyapunov analysis for a coupled delayed logistic map. The map can exhibit a two-dimensional invariant torus (IT), which corresponds to a three-dimensional torus in vector fields. Numerous one-dimensional invariant closed curves (ICCs), which correspond to two-dimensional tori in vector fields, exist in a very complicated but reasonable manner inside an IT-generating region. Periodic solutions emerge at the intersections of two different thin ICC-generating regions, which we call ICC-Arnold tongues, because all three independent-frequency components of the IT become rational at the intersections. Additionally, we observe a significant bifurcation structure where conventional Arnold tongues transit to ICC-Arnold tongues through a Neimark-Sacker bifurcation in the neighborhood of a quasi-periodic Hopf bifurcation (or a quasi-periodic Neimark-Sacker bifurcation) boundary.

  14. The algebra of Killing vectors in five-dimensional space

    International Nuclear Information System (INIS)

    Rcheulishvili, G.L.

    1990-01-01

    This paper presents the algebras formed by the previously found Killing vectors in the space with line element ds. Under some conditions, an explicit dependence on r is given for the functions entering the line element ds. The curvature two-forms are described. 7 refs

  15. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  16. Boundary value problems of holomorphic vector functions in 1D QCs

    International Nuclear Information System (INIS)

    Gao Yang; Zhao Yingtao; Zhao Baosheng

    2007-01-01

    By means of the generalized Stroh formalism, two-dimensional (2D) problems of one-dimensional (1D) quasicrystals (QCs) elasticity are turned into the boundary value problems of holomorphic vector functions in a given region. If the conformal mapping from an ellipse to a circle is known, a general method for solving the boundary value problems of holomorphic vector functions can be presented. To illustrate its utility, by using the necessary and sufficient condition of boundary value problems of holomorphic vector functions, we consider two basic 2D problems in 1D QCs, that is, an elliptic hole and a rigid line inclusion subjected to uniform loading at infinity. For the crack problem, the intensity factors of phonon and phason fields are determined, and the physical sense of the results relative to phason and the difference between mechanical behaviors of the crack problem in crystals and QCs are figured out. Moreover, the same procedure can be used to deal with the elastic problems for 2D and three-dimensional (3D) QCs

  17. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  18. Command vector memory systems: high performance at low cost

    OpenAIRE

    Corbal San Adrián, Jesús; Espasa Sans, Roger; Valero Cortés, Mateo

    1998-01-01

    The focus of this paper is on designing both a low cost and high performance, high bandwidth vector memory system that takes advantage of modern commodity SDRAM memory chips. To successfully extract the full bandwidth from SDRAM parts, we propose a new memory system organization based on sending commands to the memory system as opposed to sending individual addresses. A command specifies, in a few bytes, a request for multiple independent memory words. A command is similar to a burst found in...

  19. High dimensional entanglement

    CSIR Research Space (South Africa)

    Mc

    2012-07-01

    Full Text Available High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...

  20. Vectorization of nuclear codes for atmospheric transport and exposure calculation of radioactive materials

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Shinozawa, Naohisa; Ishikawa, Hirohiko; Chino, Masamichi; Hayashi, Takashi

    1983-02-01

    Three computer codes, MATHEW and ADPIC of LLNL and GAMPUL of JAERI, for prediction of the wind field, concentration and external exposure rate of airborne radioactive materials are vectorized and the results are presented. Using the continuity equation of incompressible flow as a constraint, MATHEW calculates the three-dimensional wind field by a variational method. Using the particle-in-cell method, ADPIC calculates the advection and diffusion of radioactive materials in a three-dimensional wind field and terrain, and gives the concentration of the materials in each cell of the domain. GAMPUL calculates the external exposure rate assuming a Gaussian-plume-type distribution of concentration. The vectorized code MATHEW attained a 7.8 times speedup on the vector processor FACOM 230-75 APU. ADPIC and GAMPUL are estimated to attain 1.5 and 4 times speedups, respectively, on a CRAY-1 type vector processor. (author)
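
    The kind of loop-free formulation that such vectorization rewards can be illustrated with a toy particle-in-cell style advection-diffusion step in NumPy, where all particles are advanced at once and concentrations come from depositing particles onto a grid. The wind field, domain and diffusivity below are arbitrary; this is not the ADPIC algorithm itself.

```python
# Vectorized particle advection-diffusion and grid deposition, as an
# illustration of loop-free array programming (not the ADPIC code).
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_cells, L, dt = 100_000, 64, 1000.0, 5.0

pos = rng.uniform(0.45 * L, 0.55 * L, size=(n_particles, 2))   # initial puff

def wind(p):
    """Toy 2-D wind field; stands in for the adjusted MATHEW field."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([2.0 + 0.5 * np.sin(2 * np.pi * y / L),
                     0.5 * np.cos(2 * np.pi * x / L)], axis=1)

for _ in range(200):
    pos += dt * wind(pos)                                       # advection, all particles at once
    pos += np.sqrt(2.0 * 10.0 * dt) * rng.standard_normal(pos.shape)  # diffusion, K = 10 m^2/s
    pos %= L                                                    # periodic domain for simplicity

conc, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=n_cells, range=[[0, L], [0, L]])
print(conc.sum(), conc.max())                                   # particles per cell
```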

  1. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    , current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI...

  2. Temporal aggregation in first order cointegrated vector autoregressive

    DEFF Research Database (Denmark)

    la Cour, Lisbeth Funding; Milhøj, Anders

    2006-01-01

    We study aggregation - or sample frequencies - of time series, e.g. aggregation from weekly to monthly or quarterly time series. Aggregation usually gives shorter time series, but spurious phenomena, in e.g. daily observations, can on the other hand be avoided. An important issue is the effect of aggregation on the adjustment coefficient in cointegrated systems. We study only first order vector autoregressive processes for n-dimensional time series Xt, and we illustrate the theory by a two-dimensional and a four-dimensional model for prices of various grades of gasoline.

  3. Support Vector Machines for Hyperspectral Remote Sensing Classification

    Science.gov (United States)

    Gualtieri, J. Anthony; Cromp, R. F.

    1998-01-01

    The Support Vector Machine provides a new way to design classification algorithms which learn from examples (supervised learning) and generalize when applied to new data. We demonstrate its success on a difficult classification problem from hyperspectral remote sensing, where we obtain performances of 96% and 87% correct for a 4-class problem and a 16-class problem, respectively. These results are somewhat better than other recent results on the same data. A key feature of this classifier is its ability to use high-dimensional data without the usual recourse to a feature selection step to reduce the dimensionality of the data. For this application, this is important, as hyperspectral data consists of several hundred contiguous spectral channels for each exemplar. We provide an introduction to this new approach, and demonstrate its application to classification of an agriculture scene.
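
    The workflow described - training an SVM directly on a few hundred contiguous spectral channels with no prior feature selection - can be sketched in a few lines of scikit-learn. Synthetic "spectra" replace the hyperspectral scene, so the accuracy printed is not meaningful; only the shape of the pipeline is.

```python
# SVM trained directly on high-dimensional spectral vectors, with no
# feature-selection step; synthetic data for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_bands, n_classes = 150, 220, 4          # ~220 contiguous channels, 4 classes
X = np.vstack([rng.normal(loc=c * 0.05, scale=1.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = SVC(kernel='rbf', C=100.0, gamma='scale').fit(X_tr, y_tr)   # all 220 bands used directly
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```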

  4. Derivatives, forms and vector fields on the κ-deformed Euclidean space

    International Nuclear Information System (INIS)

    Dimitrijevic, Marija; Moeller, Lutz; Tsouchnika, Efrossini

    2004-01-01

    The model of κ-deformed space is an interesting example of a noncommutative space, since it allows a deformed symmetry. In this paper, we present new results concerning different sets of derivatives on the coordinate algebra of κ-deformed Euclidean space. We introduce a differential calculus with two interesting sets of one-forms and higher-order forms. The transformation law of vector fields is constructed in accordance with the transformation behaviour of derivatives. The crucial property of the different derivatives, forms and vector fields is that in an n-dimensional spacetime there are always n of them. This is the key difference with respect to conventional approaches, in which the differential calculus is (n + 1)-dimensional. This work shows that derivative-valued quantities such as derivative-valued vector fields appear in a generic way on noncommutative spaces

  5. Light Higgs and vector-like quarks without prejudice

    Science.gov (United States)

    Fajfer, Svjetlana; Greljo, Admir; Kamenik, Jernej F.; Mustać, Ivana

    2013-07-01

    Light vector-like quarks with non-renormalizable couplings to the Higgs are a common feature of models trying to address the electroweak (EW) hierarchy problem by treating the Higgs as a pseudo-goldstone boson of a global (approximate) symmetry. We systematically investigate the implications of the leading dimension five operators on Higgs phenomenology in presence of dynamical up- and down-type weak singlet as well as weak doublet vector-like quarks. After taking into account constraints from precision EW and flavour observables we show that contrary to the renormalizable models, significant modifications of Higgs properties are still possible and could shed light on the role of vector-like quarks in solutions to the EW hierarchy problem. We also briefly discuss implications of higher dimensional operators for direct vector-like quark searches at the LHC.

  6. High speed VLSI neural network for high energy physics

    NARCIS (Netherlands)

    Masa, P.; Masa, P.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    A CMOS neural network IC is discussed which was designed for very high speed applications. The parallel architecture, analog computing and digital weight storage provide unprecedented computing speed combined with ease of use. The circuit classifies up to 70-dimensional vectors within 20

  7. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.
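
    A condensed sketch of the trajectory-level idea is given below: a two-sample Hotelling T^2 statistic is computed at every time node of a 3-D vector trajectory, and the field-wide error is controlled through the distribution of the maximum statistic (here obtained by permutation rather than the random field theory used by SPM). The data are synthetic and this is not the spm1d implementation.

```python
# Hotelling T^2 computed along a 3-component trajectory, with a
# permutation threshold on the maximum statistic (a stand-in for the
# random-field-theory correction used by SPM).
import numpy as np

def t2_trajectory(A, B):
    """Two-sample Hotelling T^2 at each of T time nodes; A, B are (n, T, 3)."""
    nA, nB = len(A), len(B)
    d = A.mean(0) - B.mean(0)                               # (T, 3) mean difference
    T2 = np.empty(A.shape[1])
    for t in range(A.shape[1]):
        S = (np.cov(A[:, t, :].T) * (nA - 1) + np.cov(B[:, t, :].T) * (nB - 1)) / (nA + nB - 2)
        T2[t] = (nA * nB) / (nA + nB) * d[t] @ np.linalg.solve(S, d[t])
    return T2

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 101, 3))                           # 12 subjects, 101 nodes, 3 components
B = rng.normal(size=(12, 101, 3))
B[:, 40:60, 1] += 1.0                                       # an effect in one component only

T2_obs = t2_trajectory(A, B)
pooled = np.vstack([A, B])
max_null = np.array([t2_trajectory(*np.split(pooled[rng.permutation(24)], 2)).max()
                     for _ in range(200)])
threshold = np.quantile(max_null, 0.95)
print("suprathreshold nodes:", np.where(T2_obs > threshold)[0])
```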

  8. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional functions cannot capture the pattern dissimilarity among objects. In this article, we used an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
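
    The abstract does not reproduce the RIA formula, so the sketch below only illustrates the general recipe it describes: build an eigenstructure-based, angle-type dissimilarity from principal component scores of a small-n, large-p data set and feed it to a standard clustering routine. The cosine-angle dissimilarity used here is a generic stand-in for RIA, and the data are synthetic.

```python
# Angle-type dissimilarity from principal component scores, followed by
# hierarchical clustering; a generic stand-in for RIA, not RIA itself.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 500)), rng.normal(3, 1, (15, 500))])  # small n, large p

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # eigenstructure of the covariance
scores = Xc @ Vt[:10].T                             # first 10 principal component scores

norm = scores / np.linalg.norm(scores, axis=1, keepdims=True)
angle = np.arccos(np.clip(norm @ norm.T, -1.0, 1.0))  # pairwise angles between observations

labels = fcluster(linkage(squareform(angle, checks=False), method='average'),
                  t=2, criterion='maxclust')
print(labels)
```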

  9. Exact solutions of the vacuum Einstein's equations allowing for two noncommuting Killing vectors

    International Nuclear Information System (INIS)

    Aliev, V.N.; Leznov, A.N.

    1990-01-01

    Einstein's equations are written in the form of a covariant gauge theory in two-dimensional space with a binomial solvable gauge group, with respect to two noncommuting Killing vectors. The theory is exactly integrable in the one-dimensional case, and a series of particular exact solutions is constructed in the two-dimensional case. 5 refs.

  10. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  11. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    Science.gov (United States)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.

  12. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    Science.gov (United States)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
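
    The core LIC step - smearing a white-noise texture along streamlines so that correlated streaks appear along the flow - can be sketched compactly; the dye advection and volume-rendering parts of the paper are omitted, and the field, step size and kernel length below are arbitrary illustrative choices.

```python
# Minimal Line Integral Convolution: average a noise texture along short
# streamlines of a 2-D vector field (illustrative only).
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n].astype(float)
u, v = -(y - n / 2), (x - n / 2)                 # a simple rotational field
mag = np.hypot(u, v) + 1e-9
u, v = u / mag, v / mag                          # unit vectors (direction only)

noise = np.random.default_rng(0).random((n, n))
lic = np.zeros((n, n))
L = 15                                           # half kernel length (in steps)

for sign in (+1.0, -1.0):                        # integrate forward and backward
    px, py = x.copy(), y.copy()
    for _ in range(L):
        ix, iy = px.astype(int) % n, py.astype(int) % n
        lic += noise[iy, ix]                     # accumulate the texture along the path
        px += sign * u[iy, ix]                   # one Euler step along the field
        py += sign * v[iy, ix]

lic /= 2 * L                                     # average along the streamline
print(lic.shape, lic.min(), lic.max())           # imshow(lic) would show streaks along the flow
```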

  13. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    Science.gov (United States)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    In this article, one and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid. It plays an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right hand side describing the relaxation effects and interaction with a self consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of macroscopic flux functions of the system on the cell interfaces. The second order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of current scheme are compared with those obtained from the splitting scheme based on the NT central scheme. The effects of various parameters such as low field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validates its generic applicability to the given model equations. A two dimensional simulation is also performed by KFVS method for a MESFET device, producing results in good agreement with those obtained by NT-central scheme.
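
    To make the splitting idea concrete without reproducing the full kinetic scheme, the sketch below applies flux-vector splitting to 1-D linear advection, where the flux F = a u separates into upwind parts F+ = (a + |a|)u/2 and F- = (a - |a|)u/2. The actual KFVS scheme for the semiconductor hydrodynamic model splits Euler-type fluxes using Maxwellian moments and adds MUSCL reconstruction and a relaxation step, none of which is shown here.

```python
# First-order flux-vector splitting for u_t + a u_x = 0 on a periodic grid;
# a deliberately simplified stand-in for the KFVS scheme of the paper.
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / abs(a)
u = np.exp(-200.0 * (x - 0.3) ** 2)              # initial pulse

ap, am = 0.5 * (a + abs(a)), 0.5 * (a - abs(a))  # split wave speeds
for _ in range(100):
    # interface flux F_{i+1/2} = F+(u_i) + F-(u_{i+1})
    flux = ap * u + am * np.roll(u, -1)
    u -= dt / dx * (flux - np.roll(flux, 1))     # conservative update
print(u.max(), u.sum() * dx)                     # peak decays (1st order), mass is conserved
```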

  14. Extended standard vector analysis for plasma physics

    International Nuclear Information System (INIS)

    Wimmel, H.K.

    1982-02-01

    Standard vector analysis in 3-dimensional space, as found in most tables and textbooks, is complemented by a number of basic formulas that seem to be largely unknown, but are important in themselves and for some plasma physics applications, as is shown by several examples. (orig.)

  15. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems and structures, and analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar-valued and vector-valued), thus covering both scalar and vector fields as well as vector integration.

  16. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is largely poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramountly important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
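
    The FAIR recipe summarized above - rank features by the two-sample t-statistic, keep the top m, and apply the independence (diagonal-covariance) rule on that subset - is easy to sketch. The data below are synthetic and the choice m = 20 is arbitrary; the paper instead chooses m from an upper bound on the classification error.

```python
# Feature selection by two-sample t-statistics followed by the
# independence (diagonal-covariance) classification rule.
import numpy as np

rng = np.random.default_rng(0)
p, n0, n1, m = 2000, 30, 30, 20
mu1 = np.zeros(p); mu1[:10] = 1.0                    # only 10 features carry signal
X0, X1 = rng.normal(size=(n0, p)), rng.normal(mu1, size=(n1, p))

# two-sample t-statistics
se = np.sqrt(X0.var(0, ddof=1) / n0 + X1.var(0, ddof=1) / n1)
t = (X1.mean(0) - X0.mean(0)) / se
keep = np.argsort(-np.abs(t))[:m]                    # features annealed by |t|

def classify(x):
    """Independence rule on the selected features (equal priors)."""
    s2 = 0.5 * (X0[:, keep].var(0, ddof=1) + X1[:, keep].var(0, ddof=1))
    d0 = ((x[keep] - X0[:, keep].mean(0)) ** 2 / s2).sum()
    d1 = ((x[keep] - X1[:, keep].mean(0)) ** 2 / s2).sum()
    return int(d1 < d0)

test = rng.normal(mu1, size=(100, p))                # 100 new class-1 samples
print(np.mean([classify(x) for x in test]))          # fraction correctly assigned to class 1
```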

  17. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  18. Improved stability and performance from sigma-delta modulators using 1-bit vector quantization

    DEFF Research Database (Denmark)

    Risbo, Lars

    1993-01-01

    A novel class of sigma-delta modulators is presented. The usual scalar 1-b quantizer in a sigma-delta modulator is replaced by a 1-b vector quantizer with an N-dimensional input state-vector from the linear feedback filter. Generally, the vector quantizer changes the nonlinear dynamics of the modulator, and a proper choice of vector quantizer can improve both system stability and coding performance. It is shown how to construct the vector quantizer in order to limit the excursions in state-space. The proposed method is demonstrated graphically for a simple second-order modulator
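
    A toy version of the idea can be simulated directly: in the second-order loop below the 1-bit decision is taken from a linear functional of the full 2-D state vector (a hyperplane in state space) instead of the last integrator alone. The weights w are an arbitrary illustrative choice, not the vector quantizer constructed in the paper, and the crude moving-average check at the end only verifies that the bit-stream tracks the input.

```python
# Toy second-order sigma-delta loop whose 1-bit decision depends on the
# whole 2-D state vector; weights are illustrative assumptions.
import numpy as np

def sigma_delta(x, w=(0.5, 1.0)):
    """Two cascaded integrators with 1-bit feedback; y = sign(w . state)."""
    s1 = s2 = 0.0
    bits = np.empty_like(x)
    for i, xi in enumerate(x):
        y = 1.0 if w[0] * s1 + w[1] * s2 >= 0.0 else -1.0   # 1-bit decision on the state vector
        s1 += xi - y                                        # first integrator
        s2 += s1 - y                                        # second integrator
        bits[i] = y
    return bits

fs, f0, n = 1_000_000, 1000, 2 ** 15
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)                        # low-frequency test tone
bits = sigma_delta(x)

rec = np.convolve(bits, np.ones(64) / 64, mode='same')      # crude low-pass reconstruction
print(np.corrcoef(rec, x)[0, 1])                            # close to 1 if the loop behaves
```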

  19. Surface representations of two- and three-dimensional fluid flow topology

    Science.gov (United States)

    Helman, James L.; Hesselink, Lambertus

    1990-01-01

    We discuss our work using critical point analysis to generate representations of the vector field topology of numerical flow data sets. Critical points are located and characterized in a two-dimensional domain, which may be either a two-dimensional flow field or the tangential velocity field near a three-dimensional body. Tangent curves are then integrated out along the principal directions of certain classes of critical points. The points and curves are linked to form a skeleton representing the two-dimensional vector field topology. When generated from the tangential velocity field near a body in a three-dimensional flow, the skeleton includes the critical points and curves which provide a basis for analyzing the three-dimensional structure of the flow separation. The points along the separation curves in the skeleton are used to start tangent curve integrations to generate surfaces representing the topology of the associated flow separations.
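
    As a concrete illustration of the critical point analysis described above, the sketch below locates and classifies critical points of a made-up 2D vector field by the eigenvalues of its Jacobian (saddle, node, or focus). The field, the initial guesses, and the finite-difference Jacobian are all illustrative assumptions, not the authors' flow data or code.

        import numpy as np
        from scipy.optimize import fsolve

        def field(p):
            # toy planar vector field with a focus at (0, 0) and a saddle at (-1, -2)
            x, y = p
            return np.array([x * (1 - x) - y, x - 0.5 * y])

        def jacobian(p, h=1e-6):
            J = np.zeros((2, 2))
            for j in range(2):
                dp = np.zeros(2); dp[j] = h
                J[:, j] = (field(p + dp) - field(p - dp)) / (2 * h)
            return J

        def classify(p):
            ev = np.linalg.eigvals(jacobian(p))
            if np.all(np.isreal(ev)):
                if ev.real.min() * ev.real.max() < 0:
                    return "saddle"
                return "repelling node" if ev.real.min() > 0 else "attracting node"
            return "repelling focus" if ev.real.mean() > 0 else "attracting focus"

        for guess in [(0.1, 0.1), (-0.8, -1.5), (2.0, 2.0)]:
            cp = fsolve(field, guess)
            if np.allclose(field(cp), 0, atol=1e-8):
                print(np.round(cp, 3), classify(cp))

    Tangent curves of the topological skeleton would then be integrated outward from the saddle along its eigenvector directions.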

  20. Higher order jet prolongations type gauge natural bundles over vector bundles

    Directory of Open Access Journals (Sweden)

    Jan Kurek

    2004-05-01

    Full Text Available Let $r\geq 3$ and $m\geq 2$ be natural numbers and $E$ be a vector bundle with an $m$-dimensional base. We find all gauge natural bundles "similar" to the $r$-jet prolongation bundle $J^rE$ of $E$. We also find all gauge natural bundles "similar" to the vector $r$-tangent bundle $(J^r_{fl}(E,\mathbb{R})_0)^*$ of $E$.

  1. ON A PROLONGATION CONSTRUCTION FOR LOCAL NON-DIVERGENT VECTOR FIELDS ON Rn

    Directory of Open Access Journals (Sweden)

    A. M. Lukatsky

    2015-01-01

    Full Text Available The problem of a prolongation of a non-divergent vector field, defined in a vicinity of zero in R^n, to a finite non-divergent vector field on R^n is considered. Explicit formulas for the elements of the simple Lie algebra of non-divergent vector fields from the well-known Cartan series are obtained. This construction allows one to pass from the Euler equations for the ideal incompressible fluid to the Euler equations on finite-dimensional Lie groups.

  2. Temporal aggregation in first order cointegrated vector autoregressive models

    DEFF Research Database (Denmark)

    La Cour, Lisbeth Funding; Milhøj, Anders

    We study aggregation - or sample frequencies - of time series, e.g. aggregation from weekly to monthly or quarterly time series. Aggregation usually gives shorter time series, but on the other hand spurious phenomena, in e.g. daily observations, can be avoided. An important issue is the effect of aggregation on the adjustment coefficients in cointegrated systems. We study only first order vector autoregressive processes for n-dimensional time series Xt, and we illustrate the theory by a two-dimensional and a four-dimensional model for prices of various grades of gasoline...

  3. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  4. The probability of false positives in zero-dimensional analyses of one-dimensional kinematic, force and EMG trajectories.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2016-06-14

    A false positive is the mistake of inferring an effect when none exists, and although α controls the false positive (Type I error) rate in classical hypothesis testing, a given α value is accurate only if the underlying model of randomness appropriately reflects experimentally observed variance. Hypotheses pertaining to one-dimensional (1D) (e.g. time-varying) biomechanical trajectories are most often tested using a traditional zero-dimensional (0D) Gaussian model of randomness, but variance in these datasets is clearly 1D. The purpose of this study was to determine the likelihood that analyzing smooth 1D data with a 0D model of variance will produce false positives. We first used random field theory (RFT) to predict the probability of false positives in 0D analyses. We then validated RFT predictions via numerical simulations of smooth Gaussian 1D trajectories. Results showed that, across a range of public kinematic, force/moment and EMG datasets, the median false positive rate was 0.382 and not the assumed α=0.05, even for a simple two-sample t test involving N=10 trajectories per group. The median false positive rate for experiments involving three-component vector trajectories was p=0.764. This rate increased to p=0.945 for two three-component vector trajectories, and to p=0.999 for six three-component vectors. This implies that experiments involving vector trajectories have a high probability of yielding 0D statistical significance when there is, in fact, no 1D effect. Either (a) explicit a priori identification of 0D variables or (b) adoption of 1D methods can more tightly control α.
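
    A small numerical experiment in the spirit of the validation described above: smooth Gaussian 1D trajectories with no true effect are tested pointwise against the 0D two-sample t critical value, and a trial counts as a false positive if the threshold is exceeded anywhere. The smoothness, trajectory length, and iteration count are placeholders, so the printed rate will not reproduce the paper's figures exactly.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.stats import t as t_dist

        rng = np.random.default_rng(1)
        N, Q, smooth_sd, n_iter = 10, 101, 10.0, 2000     # assumed settings, not the paper's

        def smooth_trajectories(n):
            return gaussian_filter1d(rng.normal(size=(n, Q)), smooth_sd, axis=1)

        crit = t_dist.ppf(0.975, df=2 * N - 2)            # two-tailed 0D critical value, alpha = 0.05
        false_pos = 0
        for _ in range(n_iter):
            A, B = smooth_trajectories(N), smooth_trajectories(N)
            tstat = (A.mean(0) - B.mean(0)) / np.sqrt((A.var(0, ddof=1) + B.var(0, ddof=1)) / N)
            false_pos += np.any(np.abs(tstat) > crit)     # "significant anywhere" under the 0D model
        print("empirical false positive rate:", false_pos / n_iter)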

  5. General supersymmetric solutions of five-dimensional supergravity

    International Nuclear Information System (INIS)

    Gutowski, Jan B.; Sabra, Wafic

    2005-01-01

    The classification of 1/4-supersymmetric solutions of five-dimensional gauged supergravity coupled to arbitrarily many abelian vector multiplets, which was initiated elsewhere, is completed. The structure of all solutions for which the Killing vector constructed from the Killing spinor is null is investigated in both the gauged and the ungauged theories, and some new solutions are constructed.

  6. Diabetic Retinopathy Detection System Using Support Vector Machine (Sistem Deteksi Retinopati Diabetik Menggunakan Support Vector Machine)

    Directory of Open Access Journals (Sweden)

    Wahyudi Setiawan

    2014-02-01

    Full Text Available Diabetic retinopathy is a complication of diabetes mellitus. It can lead to blindness if it is not treated as early as possible. The system created in this thesis detects the level of diabetic retinopathy in images obtained from fundus photographs. There are three main steps: preprocessing, feature extraction, and classification. The preprocessing methods used in this system are Grayscale Green Channel, Gaussian Filter, Contrast Limited Adaptive Histogram Equalization, and Masking. Two-Dimensional Linear Discriminant Analysis (2DLDA) is used for feature extraction, and a Support Vector Machine (SVM) is used for classification. Testing was performed on the MESSIDOR dataset with varying numbers of images for the training phase; the remaining images were used for the testing phase. The test results show an optimal accuracy of 84%.   Keywords: Diabetic Retinopathy, Support Vector Machine, Two Dimensional Linear Discriminant Analysis, MESSIDOR

  7. Localization of vector field on dynamical domain wall

    Directory of Open Access Journals (Sweden)

    Masafumi Higuchi

    2017-03-01

    Full Text Available In the previous works (arXiv:1202.5375 and arXiv:1402.1346), the dynamical domain wall, where the four-dimensional FRW universe is embedded in the five-dimensional space–time, has been realized by using two scalar fields. In this paper, we consider the localization of vector field in three formulations. The first formulation was investigated in the previous paper (arXiv:1510.01099) for the U(1) gauge field. In the second formulation, we investigate the Dvali–Shifman mechanism (arXiv:hep-th/9612128), where the non-abelian gauge field is confined in the bulk but the gauge symmetry is spontaneously broken on the domain wall. In the third formulation, we investigate the Kaluza–Klein modes coming from the five-dimensional graviton. In the Randall–Sundrum model, the graviton was localized on the brane. We show that the (5,μ) components (μ=0,1,2,3) of the graviton are also localized on the domain wall and can be regarded as the vector field on the domain wall. There are, however, some corrections coming from the bulk extra dimension if the domain wall universe is expanding.

  8. Highly efficient retrograde gene transfer into motor neurons by a lentiviral vector pseudotyped with fusion glycoprotein.

    Directory of Open Access Journals (Sweden)

    Miyabi Hirano

    Full Text Available The development of gene therapy techniques to introduce transgenes that promote neuronal survival and protection provides effective therapeutic approaches for neurological and neurodegenerative diseases. Intramuscular injection of adenoviral and adeno-associated viral vectors, as well as lentiviral vectors pseudotyped with rabies virus glycoprotein (RV-G), permits gene delivery into motor neurons in animal models for motor neuron diseases. Recently, we developed a vector with highly efficient retrograde gene transfer (HiRet) by pseudotyping a human immunodeficiency virus type 1 (HIV-1)-based vector with fusion glycoprotein B type (FuG-B) or a variant of FuG-B (FuG-B2), in which the cytoplasmic domain of RV-G was replaced by the corresponding part of vesicular stomatitis virus glycoprotein (VSV-G). We have also developed another vector showing neuron-specific retrograde gene transfer (NeuRet) with fusion glycoprotein C type, in which the short C-terminal segment of the extracellular domain and the transmembrane/cytoplasmic domains of RV-G were substituted with the corresponding regions of VSV-G. These two vectors afford high efficiency of retrograde gene transfer into different neuronal populations in the brain. Here we investigated the efficiency of the HiRet (with FuG-B2) and NeuRet vectors for retrograde gene transfer into motor neurons in the spinal cord and hindbrain in mice after intramuscular injection and compared it with the efficiency of the RV-G pseudotype of the HIV-1-based vector. The main highlight of our results is that the HiRet vector shows the most efficient retrograde gene transfer into both spinal cord and hindbrain motor neurons, offering promising use as a gene therapeutic approach for the treatment of motor neuron diseases.

  9. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
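
    As a small illustration of sparse estimation in the p >> n regime mentioned above, the sketch below fits a cross-validated LASSO with scikit-learn on synthetic data; the data dimensions and noise level are arbitrary placeholders.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p, k = 50, 1000, 5                     # p >> n, only k truly active coefficients
        X = rng.normal(size=(n, p))
        beta = np.zeros(p); beta[:k] = 2.0
        y = X @ beta + 0.5 * rng.normal(size=n)

        model = LassoCV(cv=5).fit(X, y)           # penalty chosen by 5-fold cross-validation
        support = np.flatnonzero(model.coef_)
        print("number of selected features:", len(support), "first few:", support[:10])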

  10. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes the vectorization techniques for Monte Carlo codes on Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine, which contains heavily nested IF tests, runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that the supercomputer can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)
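
    The speedups reported above come from recasting per-particle IF tests as operations on whole arrays. The toy NumPy fragment below illustrates the same idea with boolean masks; it is unrelated to the FOWL code itself, and the "energies" and acceptance rule are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        e = rng.exponential(1.0, n)                 # toy particle "energies"
        u = rng.random(n)

        # the scalar version would loop over samples with nested IF tests;
        # the vector version expresses the same decisions as mask algebra
        accepted = (e > 0.5) & (u < np.exp(-e))     # nested conditions become boolean masks
        weights = np.where(accepted, e, 0.0)        # branch on the mask, not per element
        print("acceptance rate:", accepted.mean(), "mean weight:", weights.mean())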

  11. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

    Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively parallel assays and simultaneous monitoring of thousands of gene expression levels in biological samples. However, even a simple microarray experiment leads to very high-dimensional data and a huge amount of information, which challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a kernel-based nonlinear dimensionality reduction method built on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm, which denoises the dataset, is introduced as a replacement for the classical KNN step of LLE. In addition, a kernel-based support vector machine (SVM) is used to classify the genomic microarray data sets. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.

  12. Time-dependent behavior of D-dimensional ideal quantum gases

    International Nuclear Information System (INIS)

    Oh, Suhk Kun

    1985-01-01

    The time-dependent behavior of D-dimensional ideal quantum gases is studied within the Mori formalism and its extension by Lee. In the classical limit, the time-dependent behavior is found to be independent of the dimensionality D of the system and is characterized by an extremely damped Gaussian relaxation function. However, at T = 0 K, it depends on the particular statistics adopted for the system and also on the dimensionality of the system. For the ideal Bose gas at T = 0 K, complete Bose condensation is manifested by a collapse of the dimensionality of the Hilbert space, spanned by the basis vectors f_ν, from infinity to two. On the other hand, the dimensional effect for the ideal Fermi gas is exhibited by a change in the Hilbert space structure, which is determined by the recurrants Δ_ν and the basis vectors f_ν. More specifically, the structural form of the recurrants is modified such that the relaxation function becomes more damped as D is increased. (Author)

  13. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Full Text Available Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing any unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, statistical features are extracted from the continuous wavelet transform at different scales. The proposed method extracts 920 statistical features, so the feature set is superhigh-dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method, principal component analysis, is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.

  14. Solar monochromatic images in magneto-sensitive spectral lines and maps of vector magnetic fields

    Science.gov (United States)

    Shihui, Y.; Jiehai, J.; Minhan, J.

    1985-01-01

    A new method is described which, by use of monochromatic images in some magneto-sensitive spectral lines, allows one to derive both the magnetic field strength and the angle between the magnetic field lines and the line of sight at various places in solar active regions. In this way two-dimensional maps of vector magnetic fields may be constructed. This method was applied to some observational material and reasonable results were obtained. In addition, a project for constructing three-dimensional maps of vector magnetic fields was worked out.

  15. A vectorization of the Hess McDonnell Douglas potential flow program NUED for the STAR-100 computer

    Science.gov (United States)

    Boney, L. R.; Smith, R. E., Jr.

    1979-01-01

    The computer program NUED for analyzing potential flow about arbitrary three dimensional lifting bodies using the panel method was modified to use vector operations and run on the STAR-100 computer. A high speed of computation and ability to approximate the body surface with a large number of panels are characteristics of NUEDV. The new program shows that vector operations can be readily implemented in programs of this type to increase the computational speed on the STAR-100 computer. The virtual memory architecture of the STAR-100 facilitates the use of large numbers of panels to approximate the body surface.

  16. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
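
    The matrix correlations referred to can be computed directly from the sample cross-product matrices. The sketch below follows the usual definitions of the RV coefficient and its modified form (column-centred data, with the diagonals of the cross-product matrices removed in the modified version); the random matrices stand in for real functional genomics datasets.

        import numpy as np

        def rv_coefficients(X, Y):
            X = X - X.mean(0); Y = Y - Y.mean(0)          # column-centre (samples in rows)
            Sx, Sy = X @ X.T, Y @ Y.T
            rv = np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
            # modified RV: drop the diagonals, which dominate when p >> n
            Sx_t, Sy_t = Sx - np.diag(np.diag(Sx)), Sy - np.diag(np.diag(Sy))
            rv_mod = np.trace(Sx_t @ Sy_t) / np.sqrt(np.trace(Sx_t @ Sx_t) * np.trace(Sy_t @ Sy_t))
            return rv, rv_mod

        rng = np.random.default_rng(0)
        X = rng.normal(size=(20, 500))                    # two unrelated high-dimensional datasets
        Y = rng.normal(size=(20, 800))
        print(rv_coefficients(X, Y))                      # plain RV is inflated; modified RV is near 0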

  17. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and by allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
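
    A minimal sketch of the core idea (not the authors' exact formulation): given a set of neighbouring state vectors, find nonnegative barycentric weights summing to one that reproduce a query point, with slack variables absorbing the approximation error, solved as a linear program. The L1 error criterion and the toy data are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def barycentric_weights(neighbors, query):
            """Weights w >= 0, sum(w) = 1, minimising the L1 error of neighbors.T @ w - query."""
            k, d = neighbors.shape
            c = np.concatenate([np.zeros(k), np.ones(d)])            # minimise total slack
            A_ub = np.block([[neighbors.T, -np.eye(d)],
                             [-neighbors.T, -np.eye(d)]])
            b_ub = np.concatenate([query, -query])
            A_eq = np.concatenate([np.ones(k), np.zeros(d)])[None, :]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
            return res.x[:k], res.x[k:].sum()                        # weights, total L1 error

        rng = np.random.default_rng(0)
        V = rng.normal(size=(6, 10))                  # 6 neighbouring state vectors in 10-D phase space
        q = 0.5 * V[0] + 0.3 * V[1] + 0.2 * V[2]      # query inside their convex hull
        w, err = barycentric_weights(V, q)
        print(np.round(w, 3), round(err, 6))

    A free-running prediction would then propagate the query forward by applying the same weights to the neighbours' successors.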

  18. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and by allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  19. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and by allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  20. Identification of cardiac rhythm features by mathematical analysis of vector fields.

    Science.gov (United States)

    Fitzgerald, Tamara N; Brooks, Dana H; Triedman, John K

    2005-01-01

    Automated techniques for locating cardiac arrhythmia features are limited, and cardiologists generally rely on isochronal maps to infer patterns in the cardiac activation sequence during an ablation procedure. Velocity vector mapping has been proposed as an alternative method to study cardiac activation in both clinical and research environments. In addition to the visual cues that vector maps can provide, vector fields can be analyzed using mathematical operators such as the divergence and curl. In the current study, conduction features were extracted from velocity vector fields computed from cardiac mapping data. The divergence was used to locate ectopic foci and wavefront collisions, and the curl to identify central obstacles in reentrant circuits. Both operators were applied to simulated rhythms created from a two-dimensional cellular automaton model, to measured data from an in situ experimental canine model, and to complex three-dimensional human cardiac mapping data sets. Analysis of simulated vector fields indicated that the divergence is useful in identifying ectopic foci, with a relatively small number of vectors and with errors of up to 30 degrees in the angle measurements. The curl was useful for identifying central obstacles in reentrant circuits, and the number of velocity vectors needed increased as the rhythm became more complex. The divergence was able to accurately identify canine in situ pacing sites, areas of breakthrough activation, and wavefront collisions. In data from human arrhythmias, the divergence reliably estimated origins of electrical activity and wavefront collisions, but the curl was less reliable at locating central obstacles in reentrant circuits, possibly due to the retrospective nature of data collection. The results indicate that the curl and divergence operators applied to velocity vector maps have the potential to add valuable information in cardiac mapping and can be used to supplement human pattern recognition.
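
    A toy illustration of the two operators on a gridded velocity field (synthetic, not cardiac mapping data): a radial component standing in for an ectopic focus plus a rotational component standing in for re-entry. The divergence peaks at the source and the curl captures the rotation.

        import numpy as np

        x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
        vx = x * np.exp(-(x**2 + y**2) / 0.2) - 0.8 * y      # source + counter-clockwise rotation
        vy = y * np.exp(-(x**2 + y**2) / 0.2) + 0.8 * x

        dvx_dy, dvx_dx = np.gradient(vx, y[:, 0], x[0, :])   # axis 0 is y, axis 1 is x
        dvy_dy, dvy_dx = np.gradient(vy, y[:, 0], x[0, :])
        divergence = dvx_dx + dvy_dy       # peaks at foci, negative where wavefronts collide
        curl_z = dvy_dx - dvx_dy           # large around central obstacles of re-entrant circuits

        i, j = np.unravel_index(np.argmax(divergence), divergence.shape)
        print("max divergence at", (round(float(x[i, j]), 2), round(float(y[i, j]), 2)),
              "mean curl:", round(float(curl_z.mean()), 3))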

  1. Insect cell transformation vectors that support high level expression and promoter assessment in insect cell culture

    Science.gov (United States)

    A somatic transformation vector, pDP9, was constructed that provides a simplified means of producing permanently transformed cultured insect cells that support high levels of protein expression of foreign genes. The pDP9 plasmid vector incorporates DNA sequences from the Junonia coenia densovirus th...

  2. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph (K-NNG) based anomaly detectors. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.

  3. GNSS Single Frequency, Single Epoch Reliable Attitude Determination Method with Baseline Vector Constraint

    Directory of Open Access Journals (Sweden)

    Ang Gong

    2015-12-01

    Full Text Available For Global Navigation Satellite System (GNSS) single frequency, single epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to reconstruct the objective function rigorously. Then, the searching strategy is improved. It substitutes a gradually enlarged ellipsoidal search space for the non-ellipsoidal search space to ensure that the correct ambiguity candidates are within it, and lets the search be carried out directly by the least squares ambiguity decorrelation algorithm (LAMBDA) method. Some vector candidates are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector errors and performs robustly when the angular error is not large.

  4. An accessible four-dimensional treatment of Maxwell's equations in terms of differential forms

    Science.gov (United States)

    Sá, Lucas

    2017-03-01

    Maxwell’s equations are derived in terms of differential forms in the four-dimensional Minkowski representation, starting from the three-dimensional vector calculus differential version of these equations. Introducing all the mathematical and physical concepts needed (including the tool of differential forms), using only knowledge of elementary vector calculus and the local vector version of Maxwell’s equations, the equations are reduced to a simple and elegant set of two equations for a unified quantity, the electromagnetic field. The treatment should be accessible for students taking a first course on electromagnetism.
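
    For reference, in one common convention (a summary of the standard formulation, not text from the paper), the electromagnetic field 2-form $F = dA$ obeys the two equations
        \[ dF = 0, \qquad d\,{\star}F = \mu_0\, J , \]
    where $\star$ is the Hodge dual on Minkowski space-time and $J$ is the current 3-form; the first reproduces Faraday's law and $\nabla\cdot\mathbf{B} = 0$, the second the Ampère-Maxwell law and Gauss's law (signs and unit factors vary between conventions).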

  5. Urban air quality forecasting based on multi-dimensional collaborative Support Vector Regression (SVR): A case study of Beijing-Tianjin-Shijiazhuang.

    Science.gov (United States)

    Liu, Bing-Chun; Binaykia, Arihant; Chang, Pei-Chann; Tiwari, Manoj Kumar; Tsao, Cheng-Chin

    2017-01-01

    Today, China is facing a very serious air pollution problem, with a severe impact on human health as well as the environment. The urban cities in China are the most affected due to their rapid industrial and economic growth. Therefore, it is of extreme importance to come up with new, better, and more reliable forecasting models to accurately predict air quality. This paper selects Beijing, Tianjin, and Shijiazhuang, three cities in the Jingjinji region, to develop a new collaborative forecasting model using Support Vector Regression (SVR) for urban Air Quality Index (AQI) prediction in China. The present study aims to improve the forecasting results by minimizing the prediction error of present machine learning algorithms, taking multi-city, multi-dimensional air quality information and weather conditions as input. The results show a decrease in MAPE in the case of multi-city, multi-dimensional regression when the air quality characteristic attributes interact strongly and correlate with AQI. Also, geographical location is found to play a significant role in AQI prediction for Beijing, Tianjin, and Shijiazhuang.

  6. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  7. Numerical simulation using vorticity-vector potential formulation

    Science.gov (United States)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. There exist, however, several problems in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important problems. This point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multi-grid Poisson solver is combined with a higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and the 4th-order accurate difference method as the ...
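
    For reference, the standard relations behind the vorticity-vector potential formulation (as usually written, not quoted from the article) are
        \[ \mathbf{u} = \nabla\times\boldsymbol{\psi}, \qquad \boldsymbol{\omega} = \nabla\times\mathbf{u}, \qquad \nabla^{2}\boldsymbol{\psi} = -\boldsymbol{\omega} \quad (\text{gauge } \nabla\cdot\boldsymbol{\psi}=0), \]
    so continuity $\nabla\cdot\mathbf{u} = 0$ holds identically, and the momentum equation is advanced in vorticity form,
        \[ \frac{\partial\boldsymbol{\omega}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\omega} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u} + \nu\,\nabla^{2}\boldsymbol{\omega}. \]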

  8. An implementation of support vector machine on sentiment classification of movie reviews

    Science.gov (United States)

    Yulietha, I. M.; Faraby, S. A.; Adiwijaya; Widyaningtyas, W. C.

    2018-03-01

    With technological advances, all information about movies is available on the internet. If this information is processed properly, useful insights can be extracted from it. This research proposes to classify the sentiment of movie review documents. It uses the Support Vector Machine (SVM) method because SVM can classify high-dimensional data, which matches the textual data used in this research. The SVM is a popular machine learning technique for text classification because it learns from a collection of previously labeled documents and can provide good results. Regarding the train-test split, the 90-10 composition gives the best result, 85.6%. Regarding the SVM kernel, the linear kernel with constant 1 gives the best result, 84.9%.
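
    A minimal scikit-learn sketch of this kind of pipeline (TF-IDF features, a linear-kernel SVM, and a 90-10 split); the tiny inline dataset is a placeholder for the movie-review corpus, so the printed accuracy is not comparable to the figures above.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # placeholder reviews; a real experiment would load the movie-review corpus
        texts = ["great acting and a moving story", "a dull, predictable plot",
                 "wonderful cinematography and pacing", "terrible script and weak dialogue"] * 25
        labels = [1, 0, 1, 0] * 25

        X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.10, random_state=0)
        clf = make_pipeline(TfidfVectorizer(), LinearSVC(C=1.0))   # linear kernel, constant C = 1
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))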

  9. Two-Sample Tests for High-Dimensional Linear Regression with an Application to Detecting Interactions.

    Science.gov (United States)

    Xia, Yin; Cai, Tianxi; Cai, T Tony

    2018-01-01

    Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.

  10. Moduli space of Parabolic vector bundles over hyperelliptic curves

    Indian Academy of Sciences (India)

    27

    This has been generalized for higher-dimensional varieties by Maruyama ... Key words and phrases: parabolic structure ... Let E be a vector bundle of rank r on X. Recall that a parabolic ... Let us understand this picture geometrically. Let ω1 ...

  11. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  12. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n → c ∈ (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on method of moments and applications of graph theory....

  13. A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle

    Science.gov (United States)

    Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.

    1992-01-01

    A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: A maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.

  14. Charged particle in higher dimensional weakly charged rotating black hole spacetime

    International Nuclear Information System (INIS)

    Frolov, Valeri P.; Krtous, Pavel

    2011-01-01

    We study charged particle motion in weakly charged higher dimensional black holes. To describe the electromagnetic field we use a test field approximation and the higher dimensional Kerr-NUT-(A)dS metric as a background geometry. It is shown that for a special configuration of the electromagnetic field, the equations of motion of charged particles are completely integrable. The vector potential of such a field is proportional to one of the Killing vectors (called a primary Killing vector) from the 'Killing tower' of symmetry generating objects which exists in the background geometry. A free constant in the definition of the adopted electromagnetic potential is proportional to the electric charge of the higher dimensional black hole. The full set of independent conserved quantities in involution is found. We demonstrate that Hamilton-Jacobi equations are separable, as is the corresponding Klein-Gordon equation and its symmetry operators.

  15. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  16. Vectorized Fokker-Planck package for the CRAY-1

    International Nuclear Information System (INIS)

    McCoy, M.G.; Mirin, A.A.; Killeen, J.

    1979-08-01

    A program for the solution of the time-dependent, two dimensional, nonlinear, multi-species Fokker-Planck equation is described. The programming is written such that the loop structure is highly vectorizable on the CRAY FORTRAN Compiler. A brief discussion of the Fokker-Planck equation itself is followed by a description of the procedure developed to solve the equation efficiently. The Fokker-Planck equation is a second order partial differential equation whose coefficients depend upon moments of the distribution functions. Both the procedure for the calculation of these coefficients and the procedure for the time advancement of the equation itself must be done efficiently if significant overall time saving is to result. The coefficients are calculated in a series of nested loops, while time advancement is accomplished by a choice of either a splitting or an ADI technique. Overall, timing tests show that the vectorized CRAY program realizes up to a factor of 12 advantage over an optimized CDC-7600 program and up to a factor of 365 over a non-vectorized version of the same program on the CRAY

  17. Feature Import Vector Machine: A General Classifier with Flexible Feature Selection.

    Science.gov (United States)

    Ghosh, Samiran; Wang, Yazhen

    2015-02-01

    The support vector machine (SVM) and other reproducing kernel Hilbert space (RKHS) based classifier systems have been drawing much attention recently due to their robustness and generalization capability. The general theme is to construct classifiers based on the training data in a high-dimensional space by using all available dimensions. The SVM achieves huge data compression by selecting only the few observations which lie close to the boundary of the classifier function. However, when the number of observations is not very large (small n) but the number of dimensions/features is large (large p), not all available features are necessarily of equal importance in the classification context. Selection of a useful fraction of the available features may result in huge data compression. In this paper we propose an algorithmic approach by means of which such an optimal set of features can be selected. In short, we reverse the traditional sequential observation selection strategy of the SVM to one of sequential feature selection. To achieve this, we have modified the solution proposed by Zhu and Hastie (2005) in the context of the import vector machine (IVM), to select an optimal sub-dimensional model to build the final classifier with sufficient accuracy.

  18. Scale invariance, killing vectors, and the size of the fifth dimension

    International Nuclear Information System (INIS)

    Ross, D.K.

    1986-01-01

    An analysis is made of the classical five-dimensional sourceless Kaluza-Klein equations without assuming the existence of the usual ∂/∂ψ Killing vector, where ψ is the coordinate of the fifth dimension. The physical distance around the fifth dimension, D_5, needed for the calculation of the fine structure constant α, is not calculable in the usual theory because the equations have a global scale invariance. In the present case, the Killing vector and the global scale invariance are not present, but it is found rather generally that D_5 = 0. This indicates that quantum gravity is a necessary ingredient if α is to be calculated. It also provides an alternate explanation of why the universe appears four-dimensional.

  19. Source-specific Informative Prior for i-Vector Extraction

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2015-01-01

    An i-vector is a low-dimensional fixed-length representation of a variable-length speech utterance, and is defined as the posterior mean of a latent variable conditioned on the observed feature sequence of an utterance. The assumption is that the prior for the latent variable is non-informative...

  20. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
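
    A minimal sketch of the slowness-learning idea in its linear form (not the hierarchical SFA network used in the article): whiten the input, then keep the directions along which the temporal difference signal has the smallest variance. The three-channel toy input hides a slow sine inside fast noise.

        import numpy as np

        def linear_sfa(X, n_features=1):
            X = X - X.mean(0)
            d, E = np.linalg.eigh(np.cov(X, rowvar=False))
            W_white = E / np.sqrt(d)                  # whitening transform
            Z = X @ W_white
            d2, U = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
            return X @ (W_white @ U[:, :n_features])  # slowest directions = smallest eigenvalues

        rng = np.random.default_rng(0)
        t = np.linspace(0, 60, 4000)
        slow = np.sin(0.5 * t)                        # slowly varying latent signal
        X = np.column_stack([slow + 0.1 * rng.normal(size=t.size),
                             rng.normal(size=t.size),
                             rng.normal(size=t.size)])
        Y = linear_sfa(X)
        print("correlation with the hidden slow signal:",
              round(abs(np.corrcoef(Y[:, 0], slow)[0, 1]), 3))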

  1. A measurement system for two-dimensional DC-biased properties of magnetic materials

    International Nuclear Information System (INIS)

    Enokizono, M.; Matsuo, H.

    2003-01-01

    So far, DC-biased magnetic properties have been measured in one dimension (scalar). However, scalar magnetic properties are not enough to clarify the DC-biased magnetic behavior because they cannot exactly take into account the phase difference between the magnetic flux density vector B and the magnetic field strength vector H. Thus, the magnetic field strength H and magnetic flux density B in magnetic materials must be measured directly as vector quantities (two-dimensional). We developed a measurement system using a single-sheet tester (SST) to clarify the two-dimensional DC-biased magnetic properties. This system excites AC in the Y-direction and DC in the X-direction. This paper describes the measurement system using an SST and presents the measured two-dimensional DC-biased magnetic properties and the iron loss when the DC exciting voltage is changed.

  2. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors. The first is composed of 45 elements characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups; it complements the first sub-vector to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.

  3. Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yongfeng; Qu, Shaobo; Wang, Jiafu; Chen, Hongya [College of Science, Air Force Engineering University, Xi'an, Shaanxi 710051 (China); Zhang, Jieqiu [College of Science, Air Force Engineering University, Xi'an, Shaanxi 710051 (China); Electronic Materials Research Laboratory, Key Laboratory of Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Xu, Zhuo [Electronic Materials Research Laboratory, Key Laboratory of Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Zhang, Anxue [School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)]

    2014-06-02

    Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to either surface wave conversion, deflected reflection, or diffuse reflection. Both the simulation and experiment results verified the wide-band, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.

  4. Wideband radar cross section reduction using two-dimensional phase gradient metasurfaces

    International Nuclear Information System (INIS)

    Li, Yongfeng; Qu, Shaobo; Wang, Jiafu; Chen, Hongya; Zhang, Jieqiu; Xu, Zhuo; Zhang, Anxue

    2014-01-01

    Phase gradient metasurfaces (PGMs) are artificial surfaces that can provide pre-defined in-plane wave-vectors to manipulate the directions of refracted/reflected waves. In this Letter, we propose to achieve wideband radar cross section (RCS) reduction using two-dimensional (2D) PGMs. A 2D PGM was designed using a square combination of 49 split-ring sub-unit cells. The PGM can provide additional wave-vectors along the two in-plane directions simultaneously, leading to either surface wave conversion, deflected reflection, or diffuse reflection. Both the simulation and experiment results verified the wide-band, polarization-independent, high-efficiency RCS reduction induced by the 2D PGM.

  5. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
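
    For orientation, a minimal sketch of the classical Kernel-Adatron update that the approach builds on (no bias term, hard margin, and none of the evolutionary machinery that is the paper's actual contribution); the Gaussian-blob data are synthetic.

        import numpy as np

        def rbf_kernel(A, B, gamma=0.5):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_adatron(X, y, eta=0.1, epochs=200, gamma=0.5):
            """alpha_i is increased while the functional margin y_i * f(x_i) is below 1, clipped at 0."""
            K = rbf_kernel(X, X, gamma)
            alpha = np.zeros(len(y))
            for _ in range(epochs):
                for i in range(len(y)):
                    margin = y[i] * (alpha * y) @ K[:, i]
                    alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
            return alpha, K

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-1.5, 0.5, (30, 2)), rng.normal(1.5, 0.5, (30, 2))])
        y = np.array([-1] * 30 + [1] * 30)
        alpha, K = kernel_adatron(X, y)
        pred = np.sign(K @ (alpha * y))
        print("training accuracy:", (pred == y).mean(),
              "support vectors:", int((alpha > 1e-6).sum()))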

  6. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
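
    As a rough illustration of the recurrence-plot idea discussed above, the sketch below integrates the Lorenz96 model and thresholds pairwise state distances to obtain a recurrence matrix; the forcing F = 8, the 10%-of-maximum threshold, and all sizes are illustrative assumptions rather than the settings used by the authors.

```python
# A rough, illustrative sketch (not the authors' code): integrate the Lorenz96
# model, then threshold pairwise state distances to build a recurrence matrix.
import numpy as np

def lorenz96(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F  (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    k1 = lorenz96(x, F)
    k2 = lorenz96(x + 0.5 * dt * k1, F)
    k3 = lorenz96(x + 0.5 * dt * k2, F)
    k4 = lorenz96(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(n_vars=40, n_steps=2000, dt=0.01, discard=500):
    x = 8.0 * np.ones(n_vars)
    x[0] += 0.01                       # small perturbation to trigger chaos
    states = []
    for i in range(n_steps + discard):
        x = rk4_step(x, dt)
        if i >= discard:
            states.append(x.copy())
    return np.array(states)

traj = simulate()[::5]                 # subsample to keep the plot manageable
dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
eps = 0.1 * dists.max()                # heuristic threshold: 10% of max distance
R = (dists < eps).astype(int)          # the recurrence matrix
print("recurrence rate:", R.mean())
```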

  7. A Visualization of Evolving Clinical Sentiment Using Vector Representations of Clinical Notes.

    Science.gov (United States)

    Ghassemi, Mohammad M; Mark, Roger G; Nemati, Shamim

    2015-09-01

    Our objective in this paper was to visualize the evolution of clinical language and sentiment with respect to several common population-level categories including: time in the hospital, age, mortality, gender and race. Our analysis utilized seven years of unstructured free text notes from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) database. The text data was partitioned by category and used to generate several high dimensional vector space representations. We generated visualizations of the vector spaces using t-Distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA). We also investigated representative words from clusters in the vector space. Lastly, we inferred the general sentiment of the clinical notes toward each parameter by gauging the average distance between positive and negative keywords and all other terms in the space. We found intriguing differences in the sentiment of clinical notes over time, outcome, and demographic features. We noted a decrease in the homogeneity and complexity of clusters over time for patients with poor outcomes. We also found greater positive sentiment for females, unmarried patients, and patients of African ethnicity.
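
    The MIMIC notes themselves cannot be reproduced here, so the sketch below uses a handful of invented toy "notes": TF-IDF vectors stand in for the paper's high-dimensional representations, and PCA and t-SNE (via scikit-learn) give the kind of two-dimensional views described above. All names and parameters are assumptions for illustration.

```python
# Toy stand-in for the MIMIC notes: TF-IDF vectors of a few invented "notes"
# are embedded in 2-D with PCA and t-SNE.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

notes = [
    "patient stable, tolerating diet, plan discharge tomorrow",
    "worsening sepsis, hypotensive despite pressors, guarded prognosis",
    "post-op day two, pain controlled, ambulating without assistance",
    "acute renal failure, started dialysis, family meeting held",
    "improving oxygenation, weaning ventilator support today",
    "unresponsive, palliative care consulted, comfort measures discussed",
]

X = TfidfVectorizer().fit_transform(notes).toarray()   # high-dimensional vectors

pca_2d = PCA(n_components=2).fit_transform(X)
tsne_2d = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(X)

for name, emb in [("PCA", pca_2d), ("t-SNE", tsne_2d)]:
    print(name)
    for note, (a, b) in zip(notes, emb):
        print(f"  ({a:+.2f}, {b:+.2f})  {note[:40]}")
```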

  8. In-Vivo High Dynamic Range Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    example with a high dynamic velocity range. Velocities an order of magnitude apart are detected on the femoral artery of a 41-year-old healthy individual. Three distinct heart cycles are captured during a 3 s acquisition. The estimated vector velocities are compared against each other within...... the heart cycle. The relative standard deviation of the measured velocity magnitude between the three peak systoles was found to be 5.11% with a standard deviation on the detected angle of 1.06°. In the diastole, it was 1.46% and 6.18°, respectively. Results prove that the method is able to estimate flow...

  9. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  10. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  11. A model for soft high-energy scattering: Tensor pomeron and vector odderon

    Energy Technology Data Exchange (ETDEWEB)

    Ewerz, Carlo, E-mail: C.Ewerz@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt (Germany); Maniatis, Markos, E-mail: mmaniatis@ubiobio.cl [Departamento de Ciencias Básicas, Universidad del Bío-Bío, Avda. Andrés Bello s/n, Casilla 447, Chillán 3780000 (Chile); Nachtmann, Otto, E-mail: O.Nachtmann@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany)

    2014-03-15

    A model for soft high-energy scattering is developed. The model is formulated in terms of effective propagators and vertices for the exchange objects: the pomeron, the odderon, and the reggeons. The vertices are required to respect standard rules of QFT. The propagators are constructed taking into account the crossing properties of amplitudes in QFT and the power-law ansätze from the Regge model. We propose to describe the pomeron as an effective spin 2 exchange. This tensor pomeron gives, at high energies, the same results for the pp and pp̄ elastic amplitudes as the standard Donnachie–Landshoff pomeron. But with our tensor pomeron it is much more natural to write down effective vertices of all kinds which respect the rules of QFT. This is particularly clear for the coupling of the pomeron to particles carrying spin, for instance vector mesons. We describe the odderon as an effective vector exchange. We emphasise that with a tensor pomeron and a vector odderon the corresponding charge-conjugation relations are automatically fulfilled. We compare the model to some experimental data, in particular to data for the total cross sections, in order to determine the model parameters. The model should provide a starting point for a general framework for describing soft high-energy reactions. It should give to experimentalists an easily manageable tool for calculating amplitudes for such reactions and for obtaining predictions which can be compared in detail with data. -- Highlights: •A general model for soft high-energy hadron scattering is developed. •The pomeron is described as effective tensor exchange. •Explicit expressions for effective reggeon–particle vertices are given. •Reggeon–particle and particle–particle vertices are related. •All vertices respect the standard C parity and crossing rules of QFT.

  12. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Bechsgaard, Thor

    2016-01-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis vie...

  13. Automatic Modulation Recognition by Support Vector Machines Using Wavelet Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Feng, X Z; Yang, J; Luo, F L; Chen, J Y; Zhong, X P [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha (China)

    2006-10-15

    Automatic modulation identification plays a significant role in electronic warfare, electronic surveillance systems and electronic countermeasures. The task of modulation recognition of communication signals is to determine the modulation type and signal parameters. In fact, automatic modulation identification can be regarded as an application of pattern recognition in the communications field. The support vector machine (SVM) is a new universal learning machine which is widely used in the fields of pattern recognition, regression estimation and probability density estimation. In this paper, a new method using a wavelet kernel function was proposed, which maps the input vector x_i into a high-dimensional feature space F. In this feature space F, we can construct the optimal hyperplane that realizes the maximal margin in this space. That is to say, we can use SVM to classify the communication signals into two groups, namely analogue modulated signals and digitally modulated signals. In addition, computer simulation results are given at last, which show good performance of the method.
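
    A minimal sketch of an SVM with a wavelet-type kernel, in the spirit of the method described above: a Morlet-like product kernel is passed to scikit-learn's SVC as a callable. The kernel constants (1.75, the dilation a) and the synthetic two-class data are assumptions, not the authors' modulation features.

```python
# Illustrative SVM with a Morlet-type product kernel passed to SVC as a callable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def wavelet_kernel(X, Z, a=1.0):
    # K(x, z) = prod_j cos(1.75 * (x_j - z_j) / a) * exp(-(x_j - z_j)^2 / (2 a^2))
    diff = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)),
                   axis=-1)

# synthetic two-class data standing in for analogue vs. digitally modulated signals
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel=wavelet_kernel, C=10.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```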

  14. Automatic Modulation Recognition by Support Vector Machines Using Wavelet Kernel

    International Nuclear Information System (INIS)

    Feng, X Z; Yang, J; Luo, F L; Chen, J Y; Zhong, X P

    2006-01-01

    Automatic modulation identification plays a significant role in electronic warfare, electronic surveillance systems and electronic countermeasures. The task of modulation recognition of communication signals is to determine the modulation type and signal parameters. In fact, automatic modulation identification can be regarded as an application of pattern recognition in the communications field. The support vector machine (SVM) is a new universal learning machine which is widely used in the fields of pattern recognition, regression estimation and probability density estimation. In this paper, a new method using a wavelet kernel function was proposed, which maps the input vector x_i into a high-dimensional feature space F. In this feature space F, we can construct the optimal hyperplane that realizes the maximal margin in this space. That is to say, we can use SVM to classify the communication signals into two groups, namely analogue modulated signals and digitally modulated signals. In addition, computer simulation results are given at last, which show good performance of the method.

  15. Oscillatory regime in the multidimensional homogeneous cosmological models induced by a vector field

    International Nuclear Information System (INIS)

    Benini, R; Kirillov, A A; Montani, Giovanni

    2005-01-01

    We show that in multidimensional gravity, vector fields completely determine the structure and properties of the singularity. It turns out that in the presence of a vector field the oscillatory regime exists in all spatial dimensions and for all homogeneous models. By analysing the Hamiltonian equations we derive the Poincaré return map associated with the Kasner indices and fix the rules according to which the Kasner vectors rotate. For a four-dimensional spacetime, the oscillatory regime constructed here overlaps with the usual Belinskii-Khalatnikov-Lifshitz one.

  16. Vector independent transmission of the vector-borne bluetongue virus.

    Science.gov (United States)

    van der Sluijs, Mirjam Tineke Willemijn; de Smit, Abraham J; Moormann, Rob J M

    2016-01-01

    Bluetongue is an economically important disease of ruminants. The causative agent, Bluetongue virus (BTV), is mainly transmitted by insect vectors. This review focuses on vector-free BTV transmission, and its epizootic and economic consequences. Vector-free transmission can either be vertical, from dam to fetus, or horizontal via direct contact. For several BTV serotypes, vertical (transplacental) transmission has been described, resulting in severe congenital malformations. Transplacental transmission had been mainly associated with live vaccine strains. Yet, the European BTV-8 strain demonstrated a high incidence of transplacental transmission in natural circumstances. The relevance of transplacental transmission for the epizootiology is considered limited, especially in enzootic areas. However, transplacental transmission can have a substantial economic impact due to the loss of progeny. Inactivated vaccines have been demonstrated to prevent transplacental transmission. Vector-free horizontal transmission has also been demonstrated. Since direct horizontal transmission requires close contact of animals, it is considered only relevant for within-farm spreading of BTV. The genetic determinants which enable vector-free transmission are present in virus strains circulating in the field. More research into the genetic changes which enable vector-free transmission is essential to better evaluate the risks associated with outbreaks of new BTV serotypes and to design more appropriate control measures.

  17. Seasonal changes in the apparent position of the Sun as elementary applications of vector operations

    International Nuclear Information System (INIS)

    Levine, Jonathan

    2014-01-01

    Many introductory courses in physics face an unpleasant chicken-and-egg problem. One might choose to introduce students to physical quantities such as velocity, acceleration, and momentum in over-simplified one-dimensional applications before introducing vectors and their manipulation; or one might first introduce vectors as mathematical objects and defer demonstration of their physical utility. This paper offers a solution to this pedagogical problem: elementary vector operations can be used without mechanics concepts to understand variations in the solar latitude, duration of daylight, and orientation of the rising and setting Sun. I show how sunrise and sunset phenomena lend themselves to exercises with scalar products, vector products, unit vectors, and vector projections that can be useful for introducing vector analysis in the context of physics. (paper)
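
    In the same spirit as the exercises described above, the short sketch below computes the duration of daylight from the solar declination and the sunrise hour-angle relation cos H0 = -tan(lat) tan(dec); the declination formula is a standard approximation and is not taken from the article.

```python
# Daylight duration from the solar declination and the sunrise hour angle.
import numpy as np

def declination(day_of_year):
    # standard approximation of the solar declination, in radians
    return np.radians(-23.44) * np.cos(2 * np.pi * (day_of_year + 10) / 365.0)

def daylight_hours(latitude_deg, day_of_year):
    lat = np.radians(latitude_deg)
    dec = declination(day_of_year)
    # cos(H0) = -tan(lat) * tan(dec); clipping handles polar day and night
    cos_h0 = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    return 2 * np.degrees(np.arccos(cos_h0)) / 15.0   # 15 deg of hour angle per hour

for day, label in [(172, "June solstice"), (355, "December solstice"),
                   (80, "March equinox")]:
    print(f"{label:17s} at 45 N: {daylight_hours(45.0, day):4.1f} h of daylight")
```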

  18. Self-organized defect strings in two-dimensional crystals.

    Science.gov (United States)

    Lechner, Wolfgang; Polster, David; Maret, Georg; Keim, Peter; Dellago, Christoph

    2013-12-01

    Using experiments with single-particle resolution and computer simulations we study the collective behavior of multiple vacancies injected into two-dimensional crystals. We find that the defects assemble into linear strings, terminated by dislocations with antiparallel Burgers vectors. We show that these defect strings propagate through the crystal in a succession of rapid one-dimensional gliding and rare rotations. While the rotation rate decreases exponentially with the number of defects in the string, the diffusion constant is constant for large strings. By monitoring the separation of the dislocations at the end points, we measure their effective interactions with high precision beyond their spontaneous formation and annihilation, and we explain the double-well form of the dislocation interaction in terms of continuum elasticity theory.

  19. Topics in 2 + 1 and 3 + 1 dimensional physics

    International Nuclear Information System (INIS)

    Camperi, M.F.

    1994-01-01

    This thesis is concerned with the study of two different topics pertaining to two different dimensionalities in field theory. First, Chern-Simons gauge field theory in 2 + 1 dimensions is addressed, mainly as a field-theoretic description of knots and links in three Euclidean dimensions. The author provides both a non-perturbative and a perturbative approach, relating them in the large-N limit. A non-perturbative duality was found between the SU(N)_k Chern-Simons theory and the SU(k)_N one, providing possible physical consequences of these constructions, notably for the case of fractional statistics. Second, the thesis addresses the study of the so-called "vector model", written in the language of chiral perturbation theory in the physical (3 + 1)-dimensional spacetime. This model was introduced as a possible way to study the physics of vector and pseudoscalar mesons and is based on the assumption that there is a limit of QCD where the vector mesons become massless. The author relates this model to the hidden symmetry scheme, a model sharing the motivation of the previous one but based on different assumptions. Considering only well-established physical results such as vector meson dominance, the thesis concludes that the vector model does not appear to be a good candidate for the effective description of vector mesons.

  20. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  1. SCALAR AND VECTOR NONLINEAR DECAYS OF LOW-FREQUENCY ALFVÉN WAVES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, J. S.; Wu, D. J. [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008 (China); Voitenko, Y.; De Keyser, J., E-mail: js_zhao@pmo.ac.cn [Solar-Terrestrial Centre of Excellence, Space Physics Division, Belgian Institute for Space Aeronomy, Ringlaan 3 Avenue Circulaire, B-1180 Brussels (Belgium)

    2015-02-01

    We found several efficient nonlinear decays for Alfvén waves in the solar wind conditions. Depending on the wavelength, the dominant decay is controlled by the nonlinearities proportional to either scalar or vector products of wavevectors. The two-mode decays of the pump MHD Alfvén wave into co- and counter-propagating product Alfvén and slow waves are controlled by the scalar nonlinearities at long wavelengths ρ_i²k_{0⊥}² < ω_0/ω_ci (k_{0⊥} is the wavenumber perpendicular to the background magnetic field, ω_0 is the frequency of the pump Alfvén wave, ρ_i is the ion gyroradius, and ω_ci is the ion-cyclotron frequency). The scalar decays exhibit both local and nonlocal properties and can generate not only MHD-scale but also kinetic-scale Alfvén and slow waves, which can strongly accelerate spectral transport. All waves in the scalar decays propagate in the same plane, hence these decays are two-dimensional. At shorter wavelengths, ρ_i²k_{0⊥}² > ω_0/ω_ci, three-dimensional vector decays dominate, generating out-of-plane product waves. The two-mode decays dominate from MHD up to ion scales ρ_i k_{0⊥} ≅ 0.3; at shorter scales the one-mode vector decays become stronger and generate only Alfvén product waves. In the solar wind the two-mode decays have high growth rates > 0.1ω_0 and can explain the origin of slow waves observed at kinetic scales.

  2. An accessible four-dimensional treatment of Maxwell's equations in terms of differential forms

    International Nuclear Information System (INIS)

    Sá, Lucas

    2017-01-01

    Maxwell’s equations are derived in terms of differential forms in the four-dimensional Minkowski representation, starting from the three-dimensional vector calculus differential version of these equations. Introducing all the mathematical and physical concepts needed (including the tool of differential forms), using only knowledge of elementary vector calculus and the local vector version of Maxwell’s equations, the equations are reduced to a simple and elegant set of two equations for a unified quantity, the electromagnetic field. The treatment should be accessible for students taking a first course on electromagnetism. (paper)

  3. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow one to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
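
    The review points to R packages; as a hedged stand-in, the sketch below uses scikit-learn Gaussian mixtures and contrasts a full-covariance model with a parsimonious diagonal-covariance one on synthetic "small n / large p" data. All sizes and settings are illustrative assumptions.

```python
# Parsimonious (diagonal-covariance) Gaussian mixture vs. a full-covariance one
# on synthetic "small n / large p" data.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=150, n_features=200, centers=3,
                  cluster_std=5.0, random_state=0)

models = {
    "full covariance": GaussianMixture(n_components=3, covariance_type="full",
                                       reg_covar=1e-3, random_state=0),
    "diagonal (parsimonious)": GaussianMixture(n_components=3,
                                               covariance_type="diag",
                                               random_state=0),
}

for name, model in models.items():
    labels = model.fit(X).predict(X)
    print(f"{name:25s} ARI = {adjusted_rand_score(y, labels):.2f}, "
          f"BIC = {model.bic(X):.0f}")
```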

  4. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is unbiased with respect to dimensionality. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.
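
    A minimal sketch of a distance-based outlier ranking in the spirit of the abstract (not the authors' algorithm): each point is scored by its mean distance to its k nearest neighbours and the top-ranked points are reported, leaving the number of outliers to the user.

```python
# Mean k-NN distance as an outlier score, with a user-chosen number of outliers.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))          # 500 points in 50 dimensions
X[:5] += 6.0                            # plant 5 obvious outliers

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)              # first column is the point itself
scores = dist[:, 1:].mean(axis=1)       # mean distance to the k nearest neighbours

n_outliers = 5                          # the user selects how many to report
top = np.argsort(scores)[::-1][:n_outliers]
print("top-ranked outlier indices:", top)
```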

  5. Five-dimensional rotating black hole in a uniform magnetic field: The gyromagnetic ratio

    International Nuclear Information System (INIS)

    Aliev, A.N.; Frolov, Valeri P.

    2004-01-01

    In four-dimensional general relativity, the fact that a Killing vector in a vacuum spacetime serves as a vector potential for a test Maxwell field provides one with an elegant way of describing the behavior of electromagnetic fields near a rotating Kerr black hole immersed in a uniform magnetic field. We use a similar approach to examine the case of a five-dimensional rotating black hole placed in a uniform magnetic field of configuration with biazimuthal symmetry that is aligned with the angular momenta of the Myers-Perry spacetime. Assuming that the black hole may also possess a small electric charge we construct the five-vector potential of the electromagnetic field in the Myers-Perry metric using its three commuting Killing vector fields. We show that, like its four-dimensional counterparts, the five-dimensional Myers-Perry black hole rotating in a uniform magnetic field produces an inductive potential difference between the event horizon and an infinitely distant surface. This potential difference is determined by a superposition of two independent Coulomb fields consistent with the two angular momenta of the black hole and two nonvanishing components of the magnetic field. We also show that a weakly charged rotating black hole in five dimensions possesses two independent magnetic dipole moments specified in terms of its electric charge, mass, and angular momentum parameters. We prove that a five-dimensional weakly charged Myers-Perry black hole must have the value of the gyromagnetic ratio g=3

  6. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  7. Mass effects in three-point chronological current correlators in n-dimensional multifermion models

    International Nuclear Information System (INIS)

    Kucheryavyj, V.I.

    1991-01-01

    Three types of quantities associated with three-point chronological fermion-current correlators having arbitrary Lorentz and internal structure are calculated in the n-dimensional multifermion models with different masses. The analysis of vector and axial-vector Ward identities for regular (finite) and dimensionally regularized values of these quantities is carried out. Quantum corrections to the canonical Ward identities are obtained. These corrections are generally homogeneous functions of zeroth order in masses and under some definite conditions they are reduced to known axial-vector anomalies. The structure and properties of quantum corrections to AVV and AAA correlators in four-dimensional space-time are investigated in detail.

  8. A Two-Dimensional Solar Tracking Stationary Guidance Method Based on Feature-Based Time Series

    Directory of Open Access Journals (Sweden)

    Keke Zhang

    2018-01-01

    The amount of energy a satellite acquires has a direct impact on its operational capacity. For practical microsatellites with high functional density, the solar tracking guidance design of the solar panels plays an extremely important role. Targeting the stationary tracking problems of a new system that uses panels mounted on a two-dimensional turntable to acquire energy to the greatest extent, a two-dimensional solar tracking stationary guidance method based on feature-based time series was proposed under the constraint of limited satellite attitude coupling control capability. By analyzing the variation of the solar vector within an orbit period and the changes of the solar vector over the whole life cycle, the method establishes a two-dimensional solar tracking guidance model based on feature-based time series, realizing automatic switching of the feature-based time series and stationary guidance under different β angles and maximum angular velocity control; it is applicable to near-earth orbits of all orbital inclinations. The method was employed to design a two-dimensional solar tracking stationary guidance system, and a mathematical simulation of the guidance performance was carried out in diverse conditions against the background of in-orbit application. The simulation results show that the solar tracking accuracy of the two-dimensional stationary guidance reaches 10° or better under the integrated constraints, which meets engineering application requirements.

  9. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
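
    The 11-dimensional wind power model is not reproduced here, so the hedged sketch below uses the classic Lorenz-63 system as a stand-in to show how a largest Lyapunov exponent can be estimated from two nearby trajectories with Benettin-style renormalisation; a clearly positive exponent indicates chaos.

```python
# Largest Lyapunov exponent of Lorenz-63 from two nearby trajectories
# (Benettin-style renormalisation); expected value is roughly 0.9.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, d0, n_steps = 0.01, 1e-8, 50000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([d0, 0.0, 0.0])
log_growth = 0.0
for _ in range(n_steps):
    a, b = rk4(a, dt), rk4(b, dt)
    d = np.linalg.norm(b - a)
    log_growth += np.log(d / d0)
    b = a + (b - a) * (d0 / d)          # renormalise the separation
print("largest Lyapunov exponent ~", log_growth / (n_steps * dt))
```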

  10. Enhancing poxvirus vectors vaccine immunogenicity.

    Science.gov (United States)

    García-Arriaza, Juan; Esteban, Mariano

    2014-01-01

    Attenuated recombinant poxvirus vectors expressing heterologous antigens from pathogens are currently at various stages in clinical trials with the aim to establish their efficacy. This is because these vectors have shown excellent safety profiles, significant immunogenicity against foreign expressed antigens and are able to induce protective immune responses. In view of the limited efficacy triggered by some poxvirus strains used in clinical trials (i.e., ALVAC in the RV144 phase III clinical trial for HIV), and of the restricted replication capacity of the highly attenuated vectors like MVA and NYVAC, there is a consensus that further improvements of these vectors should be pursued. In this review we consider several strategies that are currently being implemented, as well as new approaches, to improve the immunogenicity of the poxvirus vectors. This includes heterologous prime/boost protocols, use of co-stimulatory molecules, deletion of viral immunomodulatory genes still present in the poxvirus genome, enhancing virus promoter strength, enhancing vector replication capacity, optimizing expression of foreign heterologous sequences, and the combined use of adjuvants. An optimized poxvirus vector triggering long-lasting immunity with a high protective efficacy against a selected disease should be sought.

  11. Metrics for vector quantization-based parametric speech enhancement and separation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    Speech enhancement and separation algorithms sometimes employ a two-stage processing scheme, wherein the signal is first mapped to an intermediate low-dimensional parametric description after which the parameters are mapped to vectors in codebooks trained on, for exam- ple, individual noise...

  12. Four-dimensional anti-de Sitter toroidal black holes from a three-dimensional perspective: Full complexity

    International Nuclear Information System (INIS)

    Zanchin, Vilson T.; Kleber, Antares; Lemos, Jose P.S.

    2002-01-01

    The dimensional reduction of black hole solutions in four-dimensional (4D) general relativity is performed and new 3D black hole solutions are obtained. Considering a 4D spacetime with one spacelike Killing vector, it is possible to split the Einstein-Hilbert-Maxwell action with a cosmological term in terms of 3D quantities. Definitions of quasilocal mass and charges in 3D spacetimes are reviewed. The analysis is then particularized to the toroidal charged rotating anti-de Sitter black hole. The reinterpretation of the fields and charges in terms of a three-dimensional point of view is given in each case, and the causal structure analyzed

  13. Singular vectors and invariant equations for the Schroedinger algebra in n ≥ 3 space dimensions. The general case

    International Nuclear Information System (INIS)

    Dobrev, V. K.; Stoimenov, S.

    2010-01-01

    The singular vectors in Verma modules over the Schroedinger algebra s(n) in (n + 1)-dimensional space-time are found for the case of general representations. Using the singular vectors, hierarchies of equations invariant under Schroedinger algebras are constructed.

  14. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  15. Design of a mixer for the thrust-vectoring system on the high-alpha research vehicle

    Science.gov (United States)

    Pahle, Joseph W.; Bundick, W. Thomas; Yeager, Jessie C.; Beissner, Fred L., Jr.

    1996-01-01

    One of the advanced control concepts being investigated on the High-Alpha Research Vehicle (HARV) is multi-axis thrust vectoring using an experimental thrust-vectoring (TV) system consisting of three hydraulically actuated vanes per engine. A mixer is used to translate the pitch-, roll-, and yaw-TV commands into the appropriate TV-vane commands for distribution to the vane actuators. A computer-aided optimization process was developed to perform the inversion of the thrust-vectoring effectiveness data for use by the mixer in performing this command translation. Using this process a new mixer was designed for the HARV and evaluated in simulation and flight. An important element of the Mixer is the priority logic, which determines priority among the pitch-, roll-, and yaw-TV commands.
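
    A toy control-allocation sketch, not the HARV mixer itself: an assumed vane-effectiveness matrix is inverted with a pseudo-inverse to turn pitch/roll/yaw thrust-vectoring commands into vane commands, which are then clipped to vane limits. The priority logic mentioned above is not modelled, and the matrix entries are invented.

```python
# Pseudo-inverse control allocation with an invented vane-effectiveness matrix.
import numpy as np

# Rows: pitch, roll, yaw moment per unit vane deflection; columns: 6 vanes
# (3 per engine). These numbers are purely illustrative, not flight data.
B = np.array([
    [0.8,  0.8, 0.2,  0.8,  0.8,  0.2],   # pitch effectiveness
    [0.3, -0.3, 0.0, -0.3,  0.3,  0.0],   # roll effectiveness
    [0.1,  0.1, 0.9, -0.1, -0.1, -0.9],   # yaw effectiveness
])

def mix(tv_cmd, vane_limit_deg=25.0):
    """Map a [pitch, roll, yaw] thrust-vectoring command to 6 vane deflections."""
    vanes = np.linalg.pinv(B) @ np.asarray(tv_cmd)   # minimum-norm allocation
    return np.clip(vanes, -vane_limit_deg, vane_limit_deg)

cmd = [10.0, 2.0, -5.0]                  # requested pitch/roll/yaw moments
vanes = mix(cmd)
print("vane commands:   ", np.round(vanes, 2))
print("achieved moments:", np.round(B @ vanes, 2))
```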

  16. Theoretical study for aerial image intensity in resist in high numerical aperture projection optics and experimental verification with one-dimensional patterns

    Science.gov (United States)

    Shibuya, Masato; Takada, Akira; Nakashima, Toshiharu

    2016-04-01

    In optical lithography, high-performance exposure tools are indispensable to obtain not only fine patterns but also preciseness in pattern width. Since an accurate theoretical method is necessary to predict these values, some pioneer and valuable studies have been proposed. However, there might be some ambiguity or lack of consensus regarding the treatment of diffraction by object, incoming inclination factor onto image plane in scalar imaging theory, and paradoxical phenomenon of the inclined entrance plane wave onto image in vector imaging theory. We have reconsidered imaging theory in detail and also phenomenologically resolved the paradox. By comparing theoretical aerial image intensity with experimental pattern width for one-dimensional pattern, we have validated our theoretical consideration.

  17. Radar target classification method with high accuracy and decision speed performance using MUSIC spectrum vectors and PCA projection

    Science.gov (United States)

    Secmen, Mustafa

    2011-10-01

    This paper introduces the performance of an electromagnetic target recognition method in resonance scattering region, which includes pseudo spectrum Multiple Signal Classification (MUSIC) algorithm and principal component analysis (PCA) technique. The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of known targets. Afterward, these signals are used to obtain MUSIC spectrums in real frequency domain having super-resolution ability and noise resistant feature. In the final step, PCA technique is applied to these spectrums in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is initially obtained. Subsequently, MUSIC algorithm is processed for this test signal and resulting test vector is compared with feature vectors of known targets one by one. Finally, the highest correlation gives the type of test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels although it has a few different reference aspect angles. Besides, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
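
    A hedged sketch of the pipeline described above: MUSIC pseudospectra of synthetic damped-sinusoid "late-time" signals act as the feature vectors, and a noisy test signal is matched to the known targets by correlation. The PCA step is omitted because each toy target has only one reference signal; all target frequencies and sizes are invented.

```python
# MUSIC pseudospectra as aspect-independent features, matched by correlation.
import numpy as np

def music_spectrum(x, n_sources=4, m=20, n_grid=256):
    # correlation matrix from sliding snapshots of the late-time signal
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps / len(snaps)
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, :-n_sources]                  # noise subspace (smallest eigenvalues)
    freqs = np.linspace(0, 0.5, n_grid)
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))   # steering vectors
    denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    return -10 * np.log10(denom + 1e-12)       # pseudospectrum in dB

def damped_signal(freqs, n=200, decay=0.01, rng=None):
    t = np.arange(n)
    x = sum(np.exp(-decay * t) * np.cos(2 * np.pi * f * t) for f in freqs)
    return x + (rng.normal(scale=0.01, size=n) if rng is not None else 0.0)

targets = {"A": [0.11, 0.23], "B": [0.15, 0.31], "C": [0.08, 0.27]}
features = {k: music_spectrum(damped_signal(f)) for k, f in targets.items()}

rng = np.random.default_rng(0)
test = music_spectrum(damped_signal(targets["B"], rng=rng))   # noisy unknown target
scores = {k: np.corrcoef(test, v)[0, 1] for k, v in features.items()}
print("best match:", max(scores, key=scores.get), scores)
```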

  18. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  19. A Hilton-Milner theorem for vector spaces

    NARCIS (Netherlands)

    Blokhuis, A.; Brouwer, A.E.; Chowdhury, A.; Frankl, P.; Mussche, T.J.J.; Patkós, B.; Szönyi, T.

    2010-01-01

    We show for k ≥ 3 that if q ≥ 3 and n ≥ 2k + 1, or q = 2 and n ≥ 2k + 2, then any intersecting family F of k-subspaces of an n-dimensional vector space over GF(q) with ∩_{F∈F} F = 0 has size at most (formula). This bound is sharp as is shown by Hilton-Milner type families. As an application of this

  20. Estimating transmitted waves of floating breakwater using support vector regression model

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Hegde, A.V.; Kumar, V.; Patil, S.G.

    is first mapped onto an m-dimensional feature space using some fixed (nonlinear) mapping, and then a linear model is constructed in this feature space (Ivanciuc Ovidiu 2007). Using mathematical notation, the linear model in the feature space is f(x, w)...
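
    The floating-breakwater measurements are not available here, so the sketch below fits an RBF support vector regressor (scikit-learn's SVR) to an invented smooth relation between wave/structure parameters and a transmission coefficient, only to illustrate the modelling step described above.

```python
# RBF support vector regression fitted to an invented transmission relation.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
wave_height = rng.uniform(0.05, 0.25, n)     # toy ranges, in metres
wave_period = rng.uniform(1.0, 2.5, n)       # seconds
rel_width = rng.uniform(0.1, 0.6, n)         # breakwater width / wavelength

# invented smooth relation plus noise, only for illustration
Kt = 0.8 * np.exp(-2.0 * rel_width) + 0.3 * wave_height / wave_period
Kt += rng.normal(scale=0.02, size=n)

X = np.column_stack([wave_height, wave_period, rel_width])
X_tr, X_te, y_tr, y_te = train_test_split(X, Kt, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```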

  1. Vector supersymmetric multiplets in two dimensions

    International Nuclear Information System (INIS)

    Khattab, Mohammad

    1990-01-01

    The invariance of both the N=1 supersymmetric Yang-Mills theory and the N=1 supersymmetric off-shell Wess-Zumino model in four dimensions is proved. Dimensional reduction is then applied to obtain super Yang-Mills theory with extended supersymmetry, N=2, in two dimensions. The resulting theory is then truncated to N=1 super Yang-Mills and, with further truncation, N=1/2 supersymmetry is shown to be possible. Then, using the duality transformations, we find that the off-shell supersymmetry algebra is closed and that the auxiliary fields are replaced by fourth-rank antisymmetric tensors with gauge symmetry. Finally, the mechanism of dimensional reduction is applied to obtain an N=2 extended off-shell supersymmetric model with two gauge vector fields.

  2. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    Science.gov (United States)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling high-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be a satisfactory one.
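
    A sketch of the two-layer idea under stated assumptions: kernel PCA compresses the inputs, and a least-squares linear classifier (scikit-learn's RidgeClassifier, standing in for the paper's linear-programming LS-SVM) is trained on the compressed features; the credit-card data are replaced by a synthetic set.

```python
# Layer 1: kernel PCA compression; layer 2: least-squares linear classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# synthetic, mildly imbalanced data standing in for the credit card database
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

two_layer = make_pipeline(
    KernelPCA(n_components=15, kernel="rbf", gamma=0.05),  # nonlinear compression
    RidgeClassifier(alpha=1.0),                            # least-squares classifier
)
two_layer.fit(X_tr, y_tr)
print("test accuracy:", round(two_layer.score(X_te, y_te), 3))
```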

  3. Gravity, two times, tractors, Weyl invariance, and six-dimensional quantum mechanics

    International Nuclear Information System (INIS)

    Bonezzi, R.; Latini, E.; Waldron, A.

    2010-01-01

    Fefferman and Graham showed some time ago that four-dimensional conformal geometries could be analyzed in terms of six-dimensional, ambient, Riemannian geometries admitting a closed homothety. Recently, it was shown how conformal geometry provides a description of physics manifestly invariant under local choices of unit systems. Strikingly, Einstein's equations are then equivalent to the existence of a parallel scale tractor (a six-component vector subject to a certain first order covariant constancy condition at every point in four-dimensional spacetime). These results suggest a six-dimensional description of four-dimensional physics, a viewpoint promulgated by the 2 times physics program of Bars. The Fefferman-Graham construction relies on a triplet of operators corresponding, respectively, to a curved six-dimensional light cone, the dilation generator and the Laplacian. These form an sp(2) algebra which Bars employs as a first class algebra of constraints in a six-dimensional gauge theory. In this article four-dimensional gravity is recast in terms of six-dimensional quantum mechanics by melding the 2 times and tractor approaches. This parent formulation of gravity is built from an infinite set of six-dimensional fields. Successively integrating out these fields yields various novel descriptions of gravity including a new four-dimensional one built from a scalar doublet, a tractor-vector multiplet and a conformal class of metrics.

  4. Vector velocity estimation using directional beam forming and cross-correlation

    DEFF Research Database (Denmark)

    2000-01-01

    The two-dimensional velocity vector using a pulsed ultrasound field can be determined with the invention. The method uses a focused ultrasound field along the velocity direction for probing the moving medium under investigation. Several pulses are emitted and the focused received fields along...

  5. Two-dimensional mapping of three-dimensional SPECT data: a preliminary step to the quantitation of thallium myocardial perfusion single photon emission tomography

    International Nuclear Information System (INIS)

    Goris, M.L.; Boudier, S.; Briandet, P.A.

    1987-01-01

    A method is presented by which tomographic myocardial perfusion data are prepared for quantitative analysis. The method is characterized by an interrogation of the original data, which results in a size and shape normalization. The method is analogous to the circumferential profile methods used in planar scintigraphy but requires a polar-to-cartesian transformation from three to two dimensions. As was the case in the planar situation, centering and reorientation are explicit. The degree of data reduction is evaluated by reconstructing idealized three-dimensional data from the two-dimensional sampling vectors. The method differs from previously described approaches by the absence in the resulting vector of a coordinate reflecting cartesian coordinate in the original data (slice number)

  6. Hyperspectral image classification using Support Vector Machine

    International Nuclear Information System (INIS)

    Moughal, T A

    2013-01-01

    Classification of land cover hyperspectral images is a very challenging task due to the unfavourable ratio between the number of spectral bands and the number of training samples. The focus in many applications is to investigate an effective classifier in terms of accuracy. The conventional multiclass classifiers have the ability to map the class of interest, but considerable effort and large training sets are required to fully describe the classes spectrally. Support Vector Machine (SVM) is suggested in this paper to deal with the multiclass problem of hyperspectral imagery. The attraction of this method is that it locates the optimal hyperplane between the class of interest and the rest of the classes, separating them in a new high-dimensional feature space by taking into account only the training samples that lie on the edge of the class distributions, known as support vectors; the use of kernel functions makes the classifier more flexible by making it robust against outliers. A comparative study has been undertaken to find an effective classifier by comparing Support Vector Machine (SVM) to two other well-known classifiers, i.e. Maximum Likelihood (ML) and Spectral Angle Mapper (SAM). At first, the Minimum Noise Fraction (MNF) transform was applied to extract the best possible features from the hyperspectral imagery, and then the resulting subset of features was applied to the classifiers. Experimental results illustrate that the integration of the MNF and SVM techniques significantly reduces the classification complexity and improves the classification accuracy.
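
    No hyperspectral scene is bundled here, so the hedged sketch below uses synthetic 100-band "pixels", PCA as a simple surrogate for the MNF transform, and an RBF SVM, to mirror the two-stage pipeline described above.

```python
# Band reduction (PCA as an MNF surrogate) followed by an RBF SVM.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# synthetic 100-band "pixels" with four land-cover classes
X, y = make_classification(n_samples=600, n_features=100, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

pipeline = make_pipeline(
    PCA(n_components=10),                # dimensionality reduction step
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
pipeline.fit(X_tr, y_tr)
print("overall accuracy:", round(pipeline.score(X_te, y_te), 3))
```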

  7. Support vector machines for nuclear reactor state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zavaljevski, N.; Gross, K. C.

    2000-02-14

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs) recently developed by Vladimir Vapnik and his coworkers enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.

  8. Support vector machines for nuclear reactor state estimation

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Gross, K. C.

    2000-01-01

    Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed based on empirical data. The implementation of an estimation algorithm which can make predictions on limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs) recently developed by Vladimir Vapnik and his coworkers enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods like neural networks is due mainly to the following characteristics of the method. The input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method for data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising. The combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.

  9. Integrating Transgenic Vector Manipulation with Clinical Interventions to Manage Vector-Borne Diseases.

    Directory of Open Access Journals (Sweden)

    Kenichi W Okamoto

    2016-03-01

    Many vector-borne diseases lack effective vaccines and medications, and the limitations of traditional vector control have inspired novel approaches based on using genetic engineering to manipulate vector populations and thereby reduce transmission. Yet both the short- and long-term epidemiological effects of these transgenic strategies are highly uncertain. If neither vaccines, medications, nor transgenic strategies can by themselves suffice for managing vector-borne diseases, integrating these approaches becomes key. Here we develop a framework to evaluate how clinical interventions (i.e., vaccination and medication) can be integrated with transgenic vector manipulation strategies to prevent disease invasion and reduce disease incidence. We show that the ability of clinical interventions to accelerate disease suppression can depend on the nature of the transgenic manipulation deployed (e.g., whether vector population reduction or replacement is attempted). We find that making a specific, individual strategy highly effective may not be necessary for attaining public-health objectives, provided suitable combinations can be adopted. However, we show how combining only partially effective antimicrobial drugs or vaccination with transgenic vector manipulations that merely temporarily lower vector competence can amplify disease resurgence following transient suppression. Thus, transgenic vector manipulation that cannot be sustained can have adverse consequences, consequences which ineffective clinical interventions can at best only mitigate, and at worst temporarily exacerbate. This result, which arises from differences between the time scale on which the interventions affect disease dynamics and the time scale of host population dynamics, highlights the importance of accounting for the potential delay in the effects of deploying public health strategies on long-term disease incidence. We find that for systems at the disease-endemic equilibrium, even
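
    As a toy illustration only (not the authors' model), the sketch below integrates a Ross-Macdonald-style host-vector system in which vaccination lowers host susceptibility and a transgenic intervention lowers vector competence, and compares the resulting endemic prevalence across intervention combinations; every parameter value is invented.

```python
# Toy Ross-Macdonald-style host-vector model with two intervention "knobs":
# vacc (fraction of hosts protected) and competence (vector competence factor).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a=0.3, b=0.5, c=0.5, m=5.0, r=0.05, mu=0.1,
        vacc=0.0, competence=1.0):
    ih, iv = y                                  # infected fractions: hosts, vectors
    dih = m * a * b * competence * iv * (1 - ih) * (1 - vacc) - r * ih
    div = a * c * ih * (1 - iv) - mu * iv
    return [dih, div]

scenarios = {
    "no intervention":          dict(vacc=0.0, competence=1.0),
    "vaccination only":         dict(vacc=0.5, competence=1.0),
    "transgenic only":          dict(vacc=0.0, competence=0.4),
    "vaccination + transgenic": dict(vacc=0.5, competence=0.4),
}

for name, kw in scenarios.items():
    f = lambda t, y, kw=kw: rhs(t, y, **kw)
    sol = solve_ivp(f, (0, 1000), [0.01, 0.001], max_step=1.0)
    print(f"{name:26s} endemic host prevalence ~ {sol.y[0, -1]:.2f}")
```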

  10. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  11. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recast this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  12. General n-dimensional quadrature transform and its application to interferogram demodulation.

    Science.gov (United States)

    Servin, Manuel; Quiroga, Juan Antonio; Marroquin, Jose Luis

    2003-05-01

    Quadrature operators are useful for obtaining the modulating phase phi in interferometry and temporal signals in electrical communications. In carrier-frequency interferometry and electrical communications, one uses the Hilbert transform to obtain the quadrature of the signal. In these cases the Hilbert transform gives the desired quadrature because the modulating phase is monotonically increasing. We propose an n-dimensional quadrature operator that transforms cos(phi) into -sin(phi) regardless of the frequency spectrum of the signal. With the quadrature of the phase-modulated signal, one can easily calculate the value of phi over all the domain of interest. Our quadrature operator is composed of two n-dimensional vector fields: One is related to the gradient of the image normalized with respect to local frequency magnitude, and the other is related to the sign of the local frequency of the signal. The inner product of these two vector fields gives us the desired quadrature signal. This quadrature operator is derived in the image space by use of differential vector calculus and in the frequency domain by use of an n-dimensional generalization of the Hilbert transform. A robust numerical algorithm is given to find the modulating phase of two-dimensional single-image closed-fringe interferograms by use of the ideas put forward.
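
    A minimal one-dimensional illustration of the point made above: when the modulating phase is monotonic (the carrier-frequency case), the Hilbert transform supplies the quadrature signal and the phase can be recovered directly. The paper's n-dimensional quadrature operator is not reproduced here.

```python
# Quadrature via the Hilbert transform for a monotonically increasing phase.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 2000)
phi = 2 * np.pi * 40 * t + 6 * np.sin(2 * np.pi * 3 * t)   # monotonic phase
signal = np.cos(phi)

analytic = hilbert(signal)              # ~ cos(phi) + 1j*sin(phi) = exp(1j*phi)
phi_est = np.unwrap(np.angle(analytic))

err = np.abs(phi_est - phi)[100:-100]   # ignore edge effects
print("max phase error (rad):", err.max())
```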

  13. Generalized vector calculus on convex domain

    Science.gov (United States)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  14. The variety of complete pairs of zero-dimensional subschemes of length 2 of a smooth three-dimensional variety is singular

    International Nuclear Information System (INIS)

    Timofeeva, N V

    2003-01-01

    Equations are obtained that are satisfied by the vectors of the tangent space to the variety X_22 of complete pairs of zero-dimensional subschemes of length 2 of a smooth three-dimensional projective algebraic variety at the most special point of the variety X_22. It is proved that the system of equations obtained is complete and the variety X_22 is singular.

  15. A Tannakian approach to dimensional reduction of principal bundles

    Science.gov (United States)

    Álvarez-Cónsul, Luis; Biswas, Indranil; García-Prada, Oscar

    2017-08-01

    Let P be a parabolic subgroup of a connected simply connected complex semisimple Lie group G. Given a compact Kähler manifold X, the dimensional reduction of G-equivariant holomorphic vector bundles over X × G / P was carried out in Álvarez-Cónsul and García-Prada (2003). This raises the question of dimensional reduction of holomorphic principal bundles over X × G / P. The method of Álvarez-Cónsul and García-Prada (2003) is special to vector bundles; it does not generalize to principal bundles. In this paper, we adapt to equivariant principal bundles the Tannakian approach of Nori, to describe the dimensional reduction of G-equivariant principal bundles over X × G / P, and to establish a Hitchin-Kobayashi type correspondence. In order to be able to apply the Tannakian theory, we need to assume that X is a complex projective manifold.

  16. Fuzzy support vector machine for microarray imbalanced data classification

    Science.gov (United States)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarray data contain gene expression measurements with small sample sizes and a large number of features. Furthermore, class imbalance is a common problem in microarray data: a dataset is dominated by a class that has significantly more instances than the minority classes. A classification method is therefore needed that can handle both high-dimensional and imbalanced data. The Support Vector Machine (SVM) is a classification method capable of handling large or small samples, nonlinearity, high dimensionality, overfitting, and local minima. SVM has been widely applied to DNA microarray data classification and has been shown to provide the best performance among machine learning methods. However, imbalanced data remain a problem because the SVM treats all samples as equally important, so the results are biased against the minority class. To overcome the imbalance, the Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM so that different input points contribute differently to the classifier. Minority-class samples are given large fuzzy memberships, so the FSVM pays more attention to them. Because DNA microarray data are high dimensional, with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, SVM and FSVM are analyzed both with and without FCBF, and their classification performance is compared. Based on the overall results, FSVM on the selected features has the best classification performance compared to SVM.
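
    A minimal sketch of the idea follows: synthetic imbalanced data stand in for microarray data, scikit-learn's univariate SelectKBest replaces the FCBF filter, and the fuzzy memberships are realized as per-sample weights passed to a standard SVM, which is the simplest way to mimic FSVM's reweighted objective.

```python
# Minimal FSVM-style sketch (all names, data, and parameters are illustrative assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=200, n_features=2000, n_informative=20,
                           weights=[0.9, 0.1], random_state=0)
X = SelectKBest(f_classif, k=50).fit_transform(X, y)     # stand-in for FCBF
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# "Fuzzy membership": minority-class samples get larger weight, so the
# classifier pays more attention to them (inverse class-frequency heuristic).
freq = np.bincount(ytr) / len(ytr)
membership = 1.0 / freq[ytr]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(Xtr, ytr, sample_weight=membership)
print("balanced accuracy:", balanced_accuracy_score(yte, clf.predict(Xte)))
```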

  17. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
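
    The sketch below illustrates the setup described above under simple assumptions: data generated on a three-dimensional manifold, a PCA projection estimated from a sparse uniform subsample, and a small feed-forward network trained in the projected space; none of this reproduces the paper's experiments.

```python
# Minimal sketch: projection from a sparse sample, then network training in
# the low-dimensional space (illustrative data and network sizes).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(5000, 3))          # latent manifold coordinates
A = rng.normal(size=(3, 100))
X = np.tanh(Z @ A)                              # high-dimensional observations
y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]         # function to approximate

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Projection estimated from a sparse, uniformly drawn sample of the data.
sample = rng.choice(len(Xtr), size=500, replace=False)
proj = PCA(n_components=3).fit(Xtr[sample])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(proj.transform(Xtr), ytr)
mse = np.mean((net.predict(proj.transform(Xte)) - yte) ** 2)
print("test MSE in projected space:", mse)
```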

  18. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  19. A hidden non-Abelian monopole in a 16-dimensional isotropic harmonic oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Le, Van-Hoang; Nguyen, Thanh-Son; Phan, Ngoc-Hung [Department of Physics, HCMC University of Pedagogy, 280 An Duong Vuong, Ward 10, Dist. 5, Ho Chi Minh City (Viet Nam)

    2009-05-01

    We suggest one variant of generalization of the Hurwitz transformation by adding seven extra variables that allow an inverse transformation to be obtained. Using this generalized transformation we establish the connection between the Schroedinger equation of a 16-dimensional isotropic harmonic oscillator and that of a nine-dimensional hydrogen-like atom in the field of a monopole described by a septet of potential vectors in a non-Abelian model of 28 operators. The explicit form of the potential vectors and all the commutation relations of the algebra are given.

  20. A hidden non-Abelian monopole in a 16-dimensional isotropic harmonic oscillator

    International Nuclear Information System (INIS)

    Le, Van-Hoang; Nguyen, Thanh-Son; Phan, Ngoc-Hung

    2009-01-01

    We suggest one variant of generalization of the Hurwitz transformation by adding seven extra variables that allow an inverse transformation to be obtained. Using this generalized transformation we establish the connection between the Schroedinger equation of a 16-dimensional isotropic harmonic oscillator and that of a nine-dimensional hydrogen-like atom in the field of a monopole described by a septet of potential vectors in a non-Abelian model of 28 operators. The explicit form of the potential vectors and all the commutation relations of the algebra are given.

  1. Classification of e-government documents based on cooperative expression of word vectors

    Science.gov (United States)

    Fu, Qianqian; Liu, Hao; Wei, Zhiqiang

    2017-03-01

    Effective document classification is a powerful technique for dealing with the huge volume of e-government documents automatically rather than manually. The word-to-vector (word2vec) model, which converts semantic words into low-dimensional vectors, can be successfully employed to classify e-government documents. In this paper, we propose the cooperative expression of word vectors (Co-word-vector), whose multi-granularity integration explores the possibility of modeling documents in the semantic space. We also improve the weighted continuous-bag-of-words model based on the word2vec model and the distributed representation of topic words based on the LDA model. Combining the two levels of word representation, performance results show that the proposed method outperforms the traditional method on e-government document classification.
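
    A bare-bones version of the word2vec-based pipeline (not the Co-word-vector model itself) is sketched below: word vectors are trained on a toy corpus with gensim (version 4 API assumed), each document is represented by its mean word vector, and a linear classifier is fitted on top. Corpus, labels, and hyper-parameters are invented for illustration.

```python
# Minimal word2vec document-classification sketch (toy corpus, illustrative labels).
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

docs = [["budget", "approval", "finance", "report"],
        ["road", "construction", "permit", "application"],
        ["tax", "finance", "audit", "budget"],
        ["building", "permit", "construction", "inspection"]]
labels = [0, 1, 0, 1]          # e.g. finance vs. public-works documents

w2v = Word2Vec(docs, vector_size=32, window=2, min_count=1, epochs=200, seed=1)

def doc_vector(tokens):
    """Average the word vectors of a document (simplest pooling strategy)."""
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

X = np.vstack([doc_vector(d) for d in docs])
clf = LogisticRegression().fit(X, labels)
print("predicted classes:", clf.predict(X))
```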

  2. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  3. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  4. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
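
    The flavor of the comparison described in the records above can be reproduced with a toy experiment: below, the log-determinant of a known covariance matrix is estimated from a small sample using the plain sample covariance and, as a stand-in for the shrinkage-type estimators compared in the paper, scikit-learn's Ledoit-Wolf estimator. The dimensions and the AR(1) ground truth are illustrative.

```python
# Minimal sketch: log-determinant of a high-dimensional covariance matrix
# estimated by the sample covariance vs. Ledoit-Wolf shrinkage.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
p, n = 100, 120                                   # dimension close to the sample size
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1) truth
X = rng.normal(size=(n, p)) @ np.linalg.cholesky(Sigma).T

true_logdet = np.linalg.slogdet(Sigma)[1]
sample_logdet = np.linalg.slogdet(np.cov(X, rowvar=False))[1]     # badly biased when n ~ p
lw_logdet = np.linalg.slogdet(LedoitWolf().fit(X).covariance_)[1]

print("true:", true_logdet, " sample:", sample_logdet, " Ledoit-Wolf:", lw_logdet)
```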

  5. Hydraulic performance numerical simulation of high specific speed mixed-flow pump based on quasi three-dimensional hydraulic design method

    International Nuclear Information System (INIS)

    Zhang, Y X; Su, M; Hou, H C; Song, P F

    2013-01-01

    This research adopts the quasi three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump, with the aim of verifying the hydraulic design method and improving hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iteratively solving the continuity and momentum equations of the fluid. The inverse problem is completed by using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the shape of the impeller and the flow field information are obtained once the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades and the reflux phenomenon are analyzed. The numerical results show that the quasi three-dimensional hydraulic design method for the high specific speed mixed-flow pump improves the hydraulic performance, reveals the main characteristics of the internal flow of the mixed-flow pump, and provides a basis for judging the rationality of the hydraulic design, improvement and optimization of the hydraulic model.

  6. Fluidic Vectoring of a Planar Incompressible Jet Flow

    Science.gov (United States)

    Mendez, Miguel Alfonso; Scelzo, Maria Teresa; Enache, Adriana; Buchlin, Jean-Marie

    2018-06-01

    This paper presents an experimental, numerical and theoretical analysis of the performance of a fluidic vectoring device for controlling the direction of a turbulent, two-dimensional, low Mach number (incompressible) jet flow. The investigated design is the co-flow secondary injection with a Coanda surface, which allows vectoring angles up to 25° with no need for moving mechanical parts. A simple empirical model of the vectoring process is presented and validated against experimental and numerical data. The experiments consist of flow visualization and image processing for the automatic detection of the jet centerline; the numerical simulations are carried out by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations closed with the k - ω SST turbulence model, using the PisoFoam solver from OpenFOAM. The experimental validation on three different geometrical configurations has shown that the model is capable of providing a fast and reliable evaluation of the device performance as a function of the operating conditions.

  7. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. Since gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller quadratic programming problems instead of one large problem, as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the wavelet kernel-based primal twin support vector machine algorithm. The mean errors of economic development prediction for the wavelet kernel-based primal twin support vector machine and traditional support vector machine models, trained on samples with 3–5 dimensional input vectors, are compared in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.
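
    As a hedged illustration of the kernel (not of the primal twin SVM solver, for which scikit-learn has no counterpart), the sketch below plugs a Morlet-type wavelet kernel into an ordinary epsilon-SVR via a precomputed Gram matrix; the data and the dilation parameter are invented, not the Anhui GDP series.

```python
# Minimal sketch: Morlet-type wavelet kernel K(x,y) = prod_i cos(1.75*d_i)*exp(-d_i^2/2),
# with d_i = (x_i - y_i)/a, used in a standard SVR (illustrative data and parameters).
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    D = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * D) * np.exp(-0.5 * D ** 2), axis=2)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(80, 3))              # 3-dimensional input vectors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2] ** 2

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
model.fit(wavelet_kernel(X, X), y)
pred = model.predict(wavelet_kernel(X, X))        # in-sample check of the fit
print("mean absolute error:", np.mean(np.abs(pred - y)))
```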

  8. Dynamics of vector dark soliton induced by the Rabi coupling in one-dimensional trapped Bose–Einstein condensates

    International Nuclear Information System (INIS)

    Liu, Chao-Fei; Lu, Min; Liu, Wei-Qing

    2012-01-01

    The Rabi coupling between two components of Bose–Einstein condensates is used to controllably change ordinary dark soliton into dynamic vector dark soliton or ordinary vector dark soliton. When all inter- and intraspecies interactions are equal, the dynamic vector dark soliton is exactly constructed by two sub-dark-solitons, which oscillate with the same velocity and periodically convert with each other. When the interspecies interactions deviate from the intraspecies ones, the whole soliton can maintain its essential shape, but the sub-dark-soliton becomes inexact or is broken. This study indicates that the Rabi coupling can be used to obtain various vector dark solitons. -- Highlights: ► We consider the Rabi coupling to affect the dark soliton in BECs. ► We examine the changes of the initial dark solitons. ► The structure of the soliton depends on the inter- and intraspecies interactions strength. ► The Rabi coupling can be used to obtain various vector dark solitons.

  9. [New strategy for RNA vectorization in mammalian cells. Use of a peptide vector].

    Science.gov (United States)

    Vidal, P; Morris, M C; Chaloin, L; Heitz, F; Divita, G

    1997-04-01

    A major barrier for gene delivery is the low permeability of nucleic acids to cellular membranes. The development of antisenses and gene therapy has focused mainly on improving methods of oligonucleotide or gene delivery to the cell. In this report we describe a new strategy for RNA cell delivery, based on a short single peptide. This peptide vector is derived from both the fusion domain of the gp41 protein of HIV and the nuclear localization sequence of the SV40 large T antigen. This peptide vector localizes rapidly to the cytoplasm and then to the nucleus of human fibroblasts (HS-68) within a few minutes and exhibits a high affinity for a single-stranded mRNA encoding the p66 subunit of the HIV-1 reverse transcriptase (in a 100 nM range). The peptide/RNA complex formation involves mainly electrostatic interactions between the basic residues of the peptide and the charges on the phosphate group of the RNA. In the presence of the peptide vector, fluorescently labelled mRNA is delivered into the cytoplasm of mammalian cells (HS68 human fibroblasts) in less than 1 h with a relatively high efficiency (80%). This new concept based on a peptide-derived vector offers several advantages compared to other compounds commonly used in gene delivery. This vector is highly soluble and exhibits no cytotoxicity at the concentrations used for optimal gene delivery. This result clearly supports the fact that this peptide vector is a powerful tool and that it can be used widely, as much for laboratory research as for new applications and development in gene and/or antisense therapy.

  10. High-Throughput Agrobacterium-mediated Transformation of Medicago Truncatula in Comparison to Two Expression Vectors

    International Nuclear Information System (INIS)

    Sultana, T.; Deeba, F.; Naqvi, S. M. S.

    2016-01-01

    Legumes have long been recalcitrant to efficient Agrobacterium-mediated transformation. The selection of Medicago truncatula as a model legume plant for molecular analysis resulted in the development of efficient Agrobacterium-mediated transformation protocols. In the current study, M. truncatula transformed plants expressing OsRGLP1 were obtained through GATEWAY technology using pGOsRGLP1 (pH7WG2.0=OsRGLP1). The transformation efficiency of this vector was compared with that of an expression vector from the pCAMBIA series over-expressing the same gene (pCOsRGLP1). A lower percentage of explants generated hygromycin-resistant plantlets with the pGOsRGLP1 vector (18.3 percent) than with the pCOsRGLP1 vector (35.5 percent). The transformation efficiency in terms of PCR-positive plants was 9.4 percent for pGOsRGLP1 and 21.6 percent for pCOsRGLP1. Furthermore, 24.4 percent of explants generated antibiotic-resistant plantlets on 20 mg l-1 of hygromycin, which was higher than the 12.2 percent obtained on 15 mg l-1 of hygromycin. T1 progeny analysis indicated that the transgene was inherited in a Mendelian manner. The functionally active status of the transgene was monitored by a high level of superoxide dismutase (SOD) activity in the transformed progeny. (author)

  11. A novel and highly efficient production system for recombinant adeno-associated virus vector.

    Science.gov (United States)

    Wu, Zhijian; Wu, Xiaobing; Cao, Hui; Dong, Xiaoyan; Wang, Hong; Hou, Yunde

    2002-02-01

    Recombinant adeno-associated virus (rAAV) has proven to be a promising gene delivery vector for human gene therapy. However, its application has been limited by the difficulty of obtaining sufficient quantities of high-titer vector stocks. In this paper, a novel and highly efficient production system for rAAV is described. A recombinant herpes simplex virus type 1 (rHSV-1) designated HSV1-rc/DeltaUL2, which expressed adeno-associated virus type 2 (AAV-2) Rep and Cap proteins, was constructed previously. The data confirmed that its functions were to support rAAV replication and packaging, and the generated rAAV was infectious. Meanwhile, an rAAV proviral cell line designated BHK/SG2, which carried the green fluorescent protein (GFP) gene expression cassette, was established by transfecting BHK-21 cells with rAAV vector plasmid pSNAV-2-GFP. Infecting BHK/SG2 with HSV1-rc/DeltaUL2 at an MOI of 0.1 resulted in the optimal yields of rAAV, reaching 250 transducing units (TU) or 4.28x10^4 particles per cell. Therefore, compared with the conventional transfection method, the yield of rAAV using this "one proviral cell line, one helper virus" strategy was increased by two orders of magnitude. Large-scale production of rAAV can be easily achieved using this strategy and might meet the demands for clinical trials of rAAV-mediated gene therapy.

  12. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  13. Smooth controllability of infinite-dimensional quantum-mechanical systems

    International Nuclear Information System (INIS)

    Wu, Re-Bing; Tarn, Tzyh-Jong; Li, Chun-Wen

    2006-01-01

    Manipulation of infinite-dimensional quantum systems is important for controlling complex quantum dynamics with many practical physical and chemical backgrounds. In this paper, a general investigation is made of the controllability problem of quantum systems evolving on infinite-dimensional manifolds. Recognizing that such problems are related to infinite-dimensional controllability algebras, we introduce an algebraic mathematical framework to describe quantum control systems possessing such controllability algebras. We then present the concept of smooth controllability on infinite-dimensional manifolds and derive the main result on approximate strong smooth controllability. This is a nontrivial extension of the existing controllability results based on analysis over finite-dimensional vector spaces to analysis over infinite-dimensional manifolds. It also opens up many interesting problems for future studies.

  14. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.

  15. New physics/resonances in vector boson scattering at the LHC

    International Nuclear Information System (INIS)

    Reuter, Juergen; Kilian, Wolfgang; Ohl, Thorsten; Sekulla, Marco

    2016-05-01

    Vector boson scattering is (together with the production of multiple electroweak gauge bosons) the key process in the current run 2 of LHC to probe the microscopic nature of electroweak symmetry breaking. Deviations from the Standard Model are generically parameterized by higher-dimensional operators, however, there is a subtle issue of perturbative unitarity for such approaches for the process above. We discuss a parameter-free unitarization prescription to get physically meaningful predictions. In the second part, we construct simplified models for generic new resonances that can appear in vector boson scattering, with a special focus on the technicalities of tensor resonances.

  16. Migration transformation of two-dimensional magnetic vector and tensor fields

    DEFF Research Database (Denmark)

    Zhdanov, Michael; Cai, Hongzhu; Wilson, Glenn

    2012-01-01

    We introduce a new method of rapid interpretation of magnetic vector and tensor field data, based on ideas of potential field migration which extends the general principles of seismic and electromagnetic migration to potential fields. 2-D potential field migration represents a direct integral...... to the downward continuation of a well-behaved analytical function. We present case studies for imaging of SQUID-based magnetic tensor data acquired over a magnetite skarn at Tallawang, Australia. The results obtained from magnetic tensor field migration agree very well with both Euler deconvolution and the known...

  17. A three-dimensional field solutions of Halbach

    International Nuclear Information System (INIS)

    Chen Jizhong; Xiao Jijun; Zhang Yiming; Xu Chunyan

    2008-01-01

    Three-dimensional field solutions are presented for the Halbach cylinder magnet. Based on the Ampere equivalent current method, the permanent magnets are treated as distributions of current density. To obtain the three-dimensional field solution of ideally polarized permanent magnets, the solution method entails the use of the vector potential and involves the closed-form integration of the free-space Green's function. The programmed field solutions are ideal for performing rapid parametric studies of dipole Halbach cylinder magnets made from rare-earth materials. The field solutions are verified by both an analytical two-dimensional algorithm and three-dimensional finite element software. A rapid method is thus presented for extensively analyzing and optimizing the Halbach cylinder magnet. (authors)

  18. A formula for the Bloch vector of some Lindblad quantum systems

    International Nuclear Information System (INIS)

    Salgado, D.; Sanchez-Gomez, J.L.

    2004-01-01

    Using the Bloch representation of an N-dimensional quantum system and immediate results from quantum stochastic calculus, we establish a closed formula for the Bloch vector, hence also for the density operator, of a quantum system following a Lindblad evolution with selfadjoint Lindblad operators

  19. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  20. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  1. Three-Dimensional Messages for Interstellar Communication

    Science.gov (United States)

    Vakoch, Douglas A.

    One of the challenges facing independently evolved civilizations separated by interstellar distances is to communicate information unique to one civilization. One commonly proposed solution is to begin with two-dimensional pictorial representations of mathematical concepts and physical objects, in the hope that this will provide a foundation for overcoming linguistic barriers. However, significant aspects of such representations are highly conventional, and may not be readily intelligible to a civilization with different conventions. The process of teaching conventions of representation may be facilitated by the use of three-dimensional representations redundantly encoded in multiple formats (e.g., as both vectors and as rasters). After having illustrated specific conventions for representing mathematical objects in a three-dimensional space, this method can be used to describe a physical environment shared by transmitter and receiver: a three-dimensional space defined by the transmitter-receiver axis, and containing stars within that space. This method can be extended to show three-dimensional representations varying over time. Having clarified conventions for representing objects potentially familiar to both sender and receiver, novel objects can subsequently be depicted. This is illustrated through sequences showing interactions between human beings, which provide information about human behavior and personality. Extensions of this method may allow the communication of such culture-specific features as aesthetic judgments and religious beliefs. Limitations of this approach will be noted, with specific reference to ETI who are not primarily visual.

  2. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot keep pace with the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for the normal vector estimation of LSSPC. We divide the point set into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors of, and calculate normal vectors for, the interpolation nodes, which are usually far fewer than the points of the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.
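
    For context, the sketch below shows the baseline neighbour-PCA normal estimation that interpolation schemes of this kind are designed to accelerate; the k-d tree stands in for the cube subdivision, and the sphere test case is purely illustrative.

```python
# Minimal baseline sketch: per-point normals from local PCA over k nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
phi = np.arccos(rng.uniform(-1, 1, 2000))
pts = np.c_[np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)]

tree = cKDTree(pts)                   # the cube subdivision plays this role in the paper
_, idx = tree.query(pts, k=16)

normals = np.empty_like(pts)
for i, nb in enumerate(idx):
    P = pts[nb] - pts[nb].mean(axis=0)
    # The right-singular vector with the smallest singular value of the centred
    # neighbourhood approximates the surface normal at point i.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    normals[i] = Vt[-1]

# For a unit sphere the true normal is the point itself, up to sign.
err = np.abs(np.abs(np.sum(normals * pts, axis=1)) - 1.0)
print("mean |cos| deviation from 1:", err.mean())
```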

  3. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  4. TJ-II wave forms analysis with wavelets and support vector machines

    International Nuclear Information System (INIS)

    Dormido-Canto, S.; Farias, G.; Dormido, R.; Vega, J.; Sanchez, J.; Santos, M.

    2004-01-01

    Since fusion plasma experiments generate hundreds of signals, it is essential to have automatic mechanisms for searching for similarities and retrieving specific data in the waveform database. The wavelet transform (WT) is a transformation that allows one to map signals to spaces of lower dimensionality. The support vector machine (SVM) is a very effective method for general-purpose pattern recognition. Given a set of input vectors which belong to two different classes, the SVM maps the inputs into a high-dimensional feature space through some nonlinear mapping, where an optimal separating hyperplane is constructed. In this work, the combined use of WT and SVM is proposed for searching and retrieving similar waveforms in the TJ-II database. In a first stage, plasma signals are preprocessed by the WT to reduce their dimensionality and to extract their main features. In the next stage, using the smoothed signals produced by the WT, the SVM is applied to demonstrate the efficiency of the proposed method in sorting out thousands of fusion plasma signals. From observation of several experiments, our WT+SVM method is very viable, and the results seem promising. However, further work remains: we have to finish the development of a Matlab toolbox for WT+SVM processing and to include new relevant features in the SVM inputs to improve the technique. We also have to improve the preprocessing of the input signals and to study the performance of other generic and custom kernels. To achieve this, and since the preprocessing stages are very time consuming, we are going to study the viability of using DSPs, FPGAs or parallel programming techniques to reduce the execution time.
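
    A toy version of the WT+SVM pipeline is sketched below: synthetic waveforms replace TJ-II signals, the energy of each wavelet sub-band serves as the low-dimensional feature vector, and a standard SVM performs the classification. The wavelet family, decomposition level, and signal classes are illustrative choices.

```python
# Minimal WT+SVM sketch (synthetic waveforms, illustrative parameters).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)

def make_waveform(cls):
    f = 30 if cls == 0 else 70                    # two synthetic signal classes
    return np.sin(2 * np.pi * f * t) * np.exp(-3 * t) + 0.3 * rng.normal(size=t.size)

signals = [make_waveform(c) for c in range(2) for _ in range(50)]
labels = [c for c in range(2) for _ in range(50)]

def wavelet_features(sig, wavelet="db4", level=5):
    """Energy of each wavelet sub-band: a low-dimensional signature of the waveform."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

X = np.vstack([wavelet_features(s) for s in signals])
acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, labels, cv=5).mean()
print("5-fold accuracy:", acc)
```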

  5. High-energy vector boson scattering after the Higgs discovery

    International Nuclear Information System (INIS)

    Kilian, Wolfgang; Sekulla, Marco; Ohl, Thorsten; Reuter, Juergen

    2014-08-01

    Weak vector-boson W,Z scattering at high energy probes the Higgs sector and is most sensitive to any new physics associated with electroweak symmetry breaking. We show that in the presence of the 125 GeV Higgs boson, a conventional effective-theory analysis fails for this class of processes. We propose to extrapolate the effective-theory ansatz by an extension of the parameter-free K-matrix unitarization prescription, which we denote as direct T-matrix unitarization. We generalize this prescription to arbitrary non-perturbative models and describe the implementation, as an asymptotically consistent reference model matched to the low-energy effective theory. We present exemplary numerical results for full six-fermion processes at the LHC.

  6. Vectorization of DOT3.5 code

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Ishiguro, Misako; Tsutsui, Tsuneo

    1990-07-01

    In this report, we describe the vectorization of the two-dimensional Sn-method radiation transport code DOT3.5. The vectorized codes are not only the NEA original version developed at ORNL but also the versions improved by JAERI: the DOT3.5 FNS version for fusion neutronics analyses, the DOT3.5 FER version for fusion reactor design, and the ESPRIT module of the RADHEAT-V4 code system for radiation shielding and radiation transport analyses. In DOT3.5, input/output processing time amounts to a great part of the elapsed time when a large number of energy groups and/or a large number of spatial mesh points are used in the calculated problem. Therefore, an improvement has been made for the speedup of input/output processing in the DOT3.5 FNS version and the DOT-DD (Double Differential cross section) code. The total speedup ratio of the vectorized version to the original scalar one is 1.7∼1.9 for the DOT3.5 NEA version, 2.2∼2.3 for the DOT3.5 FNS version, 1.7 for the DOT3.5 FER version, and 3.1∼4.4 for RADHEAT-V4, respectively. The elapsed times for the improved DOT3.5 FNS version and DOT-DD are reduced to 50∼65% of that of the original version by the input/output speedup. In this report, we describe a summary of the codes, the techniques used for vectorization and input/output speedup, verification of the computed results, and the speedup effect. (author)

  7. Self-trapping of scalar and vector dipole solitary waves in Kerr media

    International Nuclear Information System (INIS)

    Zhong Weiping; Belic, Milivoj R.; Assanto, Gaetano; Malomed, Boris A.; Huang Tingwen

    2011-01-01

    We report solutions for expanding dipole-type optical solitary waves in two-dimensional Kerr media with the self-focusing nonlinearity, using exact analytical (Hirota) and numerical methods. Such localized beams carry intrinsic vorticity and exhibit symmetric shapes for both scalar and vector solitary modes. When vector beams are close to the scalar limit, simulations demonstrate their stability over propagation distances exceeding 50 diffraction lengths. In fact, the continuous expansion helps the vortical beams avoid the instability against the splitting, collapse, or decay, making them 'convectively stable' patterns.

  8. An assessment of support vector machines for land cover classification

    Science.gov (United States)

    Huang, C.; Davis, L.S.; Townshend, J.R.G.

    2002-01-01

    The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
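
    A hedged, down-scaled version of such a comparison is sketched below on synthetic "multispectral pixel" data; Gaussian naive Bayes is used as a rough stand-in for the maximum likelihood classifier, and none of the settings reproduce the satellite experiments of the paper.

```python
# Minimal classifier comparison sketch (synthetic data, illustrative settings).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1500, n_features=30, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", C=10, gamma="scale"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
    "Gaussian NB (MLC stand-in)": GaussianNB(),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```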

  9. Multi-dimensional quasitoeplitz Markov chains

    Directory of Open Access Journals (Sweden)

    Alexander N. Dudin

    1999-01-01

    This paper deals with multi-dimensional quasitoeplitz Markov chains. We establish a sufficient equilibrium condition and derive a functional matrix equation for the corresponding vector-generating function, whose solution is given algorithmically. The results are demonstrated in the form of examples and applications in queues with BMAP-input, which operate in synchronous random environment.

  10. N-Dimensional Fractional Lagrange's Inversion Theorem

    Directory of Open Access Journals (Sweden)

    F. A. Abd El-Salam

    2013-01-01

    Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas is developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. To extend the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.

  11. Three-dimensional volumetric display by inclined-plane scanning

    Science.gov (United States)

    Miyazaki, Daisuke; Eto, Takuma; Nishimura, Yasuhiro; Matsushita, Kenji

    2003-05-01

    A volumetric display system based on three-dimensional (3-D) scanning that uses an inclined two-dimensional (2-D) image is described. In the volumetric display system a 2-D display unit is placed obliquely in an imaging system into which a rotating mirror is inserted. When the mirror is rotated, the inclined 2-D image is moved laterally. A locus of the moving image can be observed by persistence of vision as a result of the high-speed rotation of the mirror. Inclined cross-sectional images of an object are displayed on the display unit in accordance with the position of the image plane to observe a 3-D image of the object by persistence of vision. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision. We constructed the volumetric display systems using a galvanometer mirror and a vector-scan display unit. In addition, we constructed a real-time 3-D measurement system based on a light section method. Measured 3-D images can be reconstructed in the 3-D display system in real time.

  12. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Ö.; Krücker, D.; Melzer-Pellmann, I.-A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.

  13. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, M.Ö., E-mail: ozgur.sahin@desy.de; Krücker, D., E-mail: dirk.kruecker@desy.de; Melzer-Pellmann, I.-A., E-mail: isabell.melzer@desy.de

    2016-12-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.
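
    The sketch below conveys the idea of a significance-driven hyper-parameter search under heavy simplifications: toy "signal versus background" data, the approximate discovery significance s/sqrt(s+b) as the figure of merit, and a plain grid instead of the automated optimizer used in the paper.

```python
# Minimal sketch: choose SVM hyper-parameters by maximizing an approximate
# discovery significance on a validation set (all settings are illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                           weights=[0.95, 0.05], random_state=0)    # y=1 is "signal"
Xtr, Xva, ytr, yva = train_test_split(X, y, stratify=y, random_state=0)

def significance(clf):
    sel = clf.predict(Xva) == 1              # events passing the SVM selection
    s = np.sum(yva[sel] == 1)                # selected signal
    b = np.sum(yva[sel] == 0)                # selected background
    return s / np.sqrt(s + b) if s + b > 0 else 0.0

best = max(
    ((C, gamma, significance(SVC(C=C, gamma=gamma).fit(Xtr, ytr)))
     for C in (0.1, 1, 10) for gamma in (0.01, 0.1, 1)),
    key=lambda item: item[2],
)
print("best (C, gamma, significance):", best)
```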

  14. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. The two-part lecture notes volume contains the notes of most of the lecture courses.

  15. Properties of vector and axial-vector mesons from a generalized Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Bernard, V.; Meissner, U.G.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge

    1988-01-01

    We construct a generalized Nambu-Jona-Lasinio lagrangian including scalar, pseudoscalar, vector and axial-vector mesons. We specialize to the two-flavor case. The properties of the structured vacuum as well as meson masses and coupling constants are calculated giving an overall agreement within 20% of the experimental data. We investigate the meson properties at finite density. In contrast to the mass of the scalar σ-meson, which decreases sharply with increasing density, the vector meson masses are almost independent of density. Furthermore, the vector-meson-quark coupling constants are also stable against density changes. We point out that these results imply a softening of the nuclear equation of state at high densities. Furthermore, we discuss the breakdown of the KFSR relation on the quark level as well as other deviations from phenomenological concepts such as universality and vector meson dominance. (orig.)

  16. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n^2 vectors has to be compared with the other n^2 - 1 vectors in terms of distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU- and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimization of the GPU algorithm.
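
    A plain NumPy reference implementation of the filter that the GPU version accelerates is sketched below; it uses a 3x3 window on a small random RGB image and leaves the borders unfiltered for brevity.

```python
# Minimal vector median filter sketch (CPU reference, illustrative image).
import numpy as np

def vector_median_filter(img, w=3):
    r = w // 2
    out = img.copy()
    H, W, _ = img.shape
    for i in range(r, H - r):
        for j in range(r, W - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1].reshape(-1, 3).astype(float)
            # Sum of distances from each colour vector to all others in the window.
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[i, j] = win[np.argmin(d)]     # vector median of the window
    return out

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
filtered = vector_median_filter(noisy)
print(filtered.shape)
```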

  17. On High Dimensional Searching Spaces and Learning Methods

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2017-01-01

    , and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some...

  18. Properties of Vector Preisach Models

    Science.gov (United States)

    Kahler, Gary R.; Patel, Umesh D.; Torre, Edward Della

    2004-01-01

    This paper discusses rotational anisotropy and rotational accommodation of magnetic particle tape. These effects have a performance impact during the reading and writing of the recording process. We introduce the reduced vector model as the basis for the computations. Rotational magnetization models must accurately compute the anisotropic characteristics of ellipsoidally magnetizable media. An ellipticity factor is derived for these media that computes the two-dimensional magnetization trajectory for all applied fields. An orientation correction must be applied to the computed rotational magnetization. For isotropic materials, an orientation correction has been developed and presented. For anisotropic materials, an orientation correction is introduced.

  19. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrapeoff zone shielding for sputtered impurities

  20. Spanning forests and the vector bundle Laplacian

    OpenAIRE

    Kenyon, Richard

    2011-01-01

    The classical matrix-tree theorem relates the determinant of the combinatorial Laplacian on a graph to the number of spanning trees. We generalize this result to Laplacians on one- and two-dimensional vector bundles, giving a combinatorial interpretation of their determinants in terms of so-called cycle rooted spanning forests (CRSFs). We construct natural measures on CRSFs for which the edges form a determinantal process. This theory gives a natural generalization of the spanning tre...

  1. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    Science.gov (United States)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on auto fluorescent data provided by the National Institute of Health (NIH). This is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, track the progression over time, and test the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures such as wavelet analysis or dimensionality reduction algorithms followed by a classification algorithm, e.g., Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm that classifies anomalies across all of these eigenimages simultaneously. To evaluate our performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection on single images and a combination of PCA and VMF. LE combined with VMF algorithm performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than individual images.
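
    The sketch below mimics the two stages of the pipeline on synthetic data: scikit-learn's SpectralEmbedding plays the role of Laplacian Eigenmaps over per-pixel spectra, and a small Gaussian template correlated with each eigenimage plays the role of the matched filter; the data cube, template, and planted anomaly are all invented.

```python
# Minimal LE + matched-filter sketch (synthetic data cube, illustrative settings).
import numpy as np
from scipy.signal import correlate2d
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
H, W, B = 32, 32, 6
cube = rng.normal(size=(H, W, B))
cube[12:16, 20:24] += 2.0                    # planted "anomaly" in all bands

# Laplacian Eigenmaps on the per-pixel spectra -> k eigenimages.
k = 3
emb = SpectralEmbedding(n_components=k, random_state=0)
eigenimages = emb.fit_transform(cube.reshape(-1, B)).reshape(H, W, k)

# Matched filter: correlate each eigenimage with a small Gaussian template and
# accumulate the absolute responses across eigenimages.
x = np.arange(-3, 4)
template = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
score = sum(np.abs(correlate2d(eigenimages[:, :, i], template, mode="same"))
            for i in range(k))
peak = np.unravel_index(np.argmax(score), score.shape)
print("strongest response at pixel:", peak)   # expected near the planted patch
```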

  2. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found for two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R^4 onto R^3, which establishes an equivalence between the Coulomb and harmonic potentials, is used to show that an exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author)

  3. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  4. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution {gamma}-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8{pi} spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  5. Permutation entropy with vector embedding delays

    Science.gov (United States)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D−1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.
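
    A minimal Python sketch of the scheme described above, under stated assumptions (embedding dimension D = 3, a synthetic noisy sine series, and a simple grid of delay vectors); it is not the authors' implementation.

      import math
      import numpy as np

      def permutation_entropy(x, delays):
          """Normalized PE for one delay vector of length D-1: ordinal patterns are
          built from the samples x[t], x[t+d1], x[t+d1+d2], ..."""
          D = len(delays) + 1
          offsets = np.concatenate(([0], np.cumsum(delays)))
          n = len(x) - offsets[-1]
          if n <= 0:
              return np.nan
          emb = np.stack([x[o:o + n] for o in offsets], axis=1)  # (n, D) embedding vectors
          patterns = np.argsort(emb, axis=1)                     # ordinal patterns
          _, counts = np.unique(patterns, axis=0, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p)) / np.log(math.factorial(D))

      if __name__ == "__main__":
          t = np.arange(4000)
          x = np.sin(2 * np.pi * t / 37) + 0.1 * np.random.default_rng(1).normal(size=t.size)
          taus = list(range(1, 60))
          pe_map = np.array([[permutation_entropy(x, (t1, t2)) for t2 in taus]
                             for t1 in taus])                    # (D-1)-dimensional PE map
          i, j = np.unravel_index(np.nanargmin(pe_map), pe_map.shape)
          print("lowest PE at delay vector", (taus[i], taus[j]))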

  6. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  7. Real-Time GPU Implementation of Transverse Oscillation Vector Velocity Flow Imaging

    DEFF Research Database (Denmark)

    Bradway, David; Pihl, Michael Johannes; Krebs, Andreas

    2014-01-01

    Rapid estimation of blood velocity and visualization of complex flow patterns are important for clinical use of diagnostic ultrasound. This paper presents real-time processing for two-dimensional (2-D) vector flow imaging which utilizes an off-the-shelf graphics processing unit (GPU). In this work...... vector flow acquisition takes 2.3 milliseconds on an Advanced Micro Devices Radeon HD 7850 GPU card. The detected velocities are accurate to within the precision limit of the output format of the display routine. Because this tool was developed as a module external to the scanner’s built...

  8. Holographic vector superconductor in Gauss–Bonnet gravity

    Directory of Open Access Journals (Sweden)

    Jun-Wang Lu

    2016-02-01

    In the probe limit, we numerically study the holographic p-wave superconductor phase transitions in the higher curvature theory. Concretely, we study the influence of the Gauss–Bonnet parameter α on the Maxwell complex vector (MCV) model in the five-dimensional Gauss–Bonnet–AdS black hole and soliton backgrounds, respectively. In both backgrounds, increasing the Gauss–Bonnet parameter α and the dimension Δ of the vector operator inhibits the vector condensate. In the black hole, the condensate quickly saturates a stable value at lower temperature. Moreover, both the stable value of the condensate and the ratio ω_g/T_c increase with α. In the soliton, the location of the second pole of the imaginary part increases with α, which implies that the energy of the quasiparticle excitation grows as the higher curvature correction is strengthened. In addition, the influence of the Gauss–Bonnet correction on the MCV model is similar to that on the SU(2) p-wave model, which confirms that the MCV model is, to some extent, a generalization of the SU(2) Yang–Mills model even without an applied magnetic field.

  9. VECTOR TOMOGRAPHY FOR THE CORONAL MAGNETIC FIELD. II. HANLE EFFECT MEASUREMENTS

    International Nuclear Information System (INIS)

    Kramar, M.; Inhester, B.; Lin, H.; Davila, J.

    2013-01-01

    In this paper, we investigate the feasibility of saturated coronal Hanle effect vector tomography or the application of vector tomographic inversion techniques to reconstruct the three-dimensional magnetic field configuration of the solar corona using linear polarization measurements of coronal emission lines. We applied Hanle effect vector tomographic inversion to artificial data produced from analytical coronal magnetic field models with equatorial and meridional currents and global coronal magnetic field models constructed by extrapolation of real photospheric magnetic field measurements. We tested tomographic inversion with only Stokes Q, U, electron density, and temperature inputs to simulate observations over large limb distances where the Stokes I parameters are difficult to obtain with ground-based coronagraphs. We synthesized the coronal linear polarization maps by inputting realistic noise appropriate for ground-based observations over a period of two weeks into the inversion algorithm. We found that our Hanle effect vector tomographic inversion can partially recover the coronal field with a poloidal field configuration, but that it is insensitive to a corona with a toroidal field. This result demonstrates that Hanle effect vector tomography is an effective tool for studying the solar corona and that it is complementary to Zeeman effect vector tomography for the reconstruction of the coronal magnetic field

  10. Next generation of adeno-associated virus 2 vectors: Point mutations in tyrosines lead to high-efficiency transduction at lower doses

    Science.gov (United States)

    Zhong, Li; Li, Baozheng; Mah, Cathryn S.; Govindasamy, Lakshmanan; Agbandje-McKenna, Mavis; Cooper, Mario; Herzog, Roland W.; Zolotukhin, Irene; Warrington, Kenneth H.; Weigel-Van Aken, Kirsten A.; Hobbs, Jacqueline A.; Zolotukhin, Sergei; Muzyczka, Nicholas; Srivastava, Arun

    2008-01-01

    Recombinant adeno-associated virus 2 (AAV2) vectors are in use in several Phase I/II clinical trials, but relatively large vector doses are needed to achieve therapeutic benefits. Large vector doses also trigger an immune response as a significant fraction of the vectors fails to traffic efficiently to the nucleus and is targeted for degradation by the host cell proteasome machinery. We have reported that epidermal growth factor receptor protein tyrosine kinase (EGFR-PTK) signaling negatively affects transduction by AAV2 vectors by impairing nuclear transport of the vectors. We have also observed that EGFR-PTK can phosphorylate AAV2 capsids at tyrosine residues. Tyrosine-phosphorylated AAV2 vectors enter cells efficiently but fail to transduce effectively, in part because of ubiquitination of AAV capsids followed by proteasome-mediated degradation. We reasoned that mutations of the surface-exposed tyrosine residues might allow the vectors to evade phosphorylation and subsequent ubiquitination and, thus, prevent proteasome-mediated degradation. Here, we document that site-directed mutagenesis of surface-exposed tyrosine residues leads to production of vectors that transduce HeLa cells ≈10-fold more efficiently in vitro and murine hepatocytes nearly 30-fold more efficiently in vivo at a log lower vector dose. Therapeutic levels of human Factor IX (F.IX) are also produced at an ≈10-fold reduced vector dose. The increased transduction efficiency of tyrosine-mutant vectors is due to lack of capsid ubiquitination and improved intracellular trafficking to the nucleus. These studies have led to the development of AAV vectors that are capable of high-efficiency transduction at lower doses, which has important implications in their use in human gene therapy. PMID:18511559

  11. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  12. High-resolution coherent three-dimensional spectroscopy of Br₂.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  13. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    Science.gov (United States)

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked-vector-dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which can be considered a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked-vector solitons generated in fiber lasers operated in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDS are obtained. However, pulse chirp plays a significant role in the generation of pseudo-high-order GVLVDS.

  14. Formulation of 11-dimensional supergravity in superspace

    International Nuclear Information System (INIS)

    Cremmer, E.; Ferrara, S.

    1980-01-01

    We formulate on-shell 11-dimensional supergravity in superspace and express its equations of motion in terms of purely geometrical quantities. All torsion and curvature components are solved in terms of a single superfield W_rstu, totally antisymmetric in its (flat vector) indices. The dimensional reduction of this formulation is expected to be related to the superspace formulation of N = 8 extended supergravity and might explain the origin of the hidden (local) SU(8) and (global) E_7 symmetries present in this theory. (orig.)

  15. The vectorization of a ray tracing program for image generation

    Science.gov (United States)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion will explain how the ray tracing process was vectorized and gives examples of the images obtained.
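
    An illustrative Python/NumPy analogue of the vectorized intersection step (not the CYBER 205 code): many rays are intersected with all spheres in the scene through a single series of array operations instead of a per-ray loop. The camera setup and scene contents are made up for the example.

      import numpy as np

      def intersect_rays_spheres(origins, dirs, centers, radii):
          """Return, for each ray, the distance to the nearest sphere hit (inf if none).
          origins, dirs: (R, 3) with unit dirs; centers: (S, 3); radii: (S,)."""
          oc = origins[:, None, :] - centers[None, :, :]           # (R, S, 3)
          b = np.einsum("rsk,rk->rs", oc, dirs)                    # (R, S)
          c = np.einsum("rsk,rsk->rs", oc, oc) - radii[None, :] ** 2
          disc = b * b - c
          t = -b - np.sqrt(np.where(disc >= 0, disc, np.inf))      # nearer root of the quadratic
          t = np.where((disc >= 0) & (t > 0), t, np.inf)
          return t.min(axis=1)                                     # nearest hit per ray

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          # One ray per pixel of a 64x64 image plane, all starting at the origin.
          xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
          dirs = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)], axis=1)
          dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
          origins = np.zeros_like(dirs)
          centers = rng.uniform(-1, 1, size=(20, 3)) + np.array([0.0, 0.0, 4.0])
          radii = rng.uniform(0.2, 0.5, size=20)
          depth = intersect_rays_spheres(origins, dirs, centers, radii).reshape(64, 64)
          print("pixels hitting a sphere:", int(np.isfinite(depth).sum()))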

  16. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  17. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties owned by the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing discrimination between regions with a high risk of disruption and those with a low risk of disruption. (paper)
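
    A minimal Python sketch of the mapping idea (a small self-organizing map written from scratch in NumPy, not the JET analysis chain); the grid size, learning schedule and synthetic two-regime data are illustrative assumptions.

      import numpy as np

      def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
          rng = np.random.default_rng(seed)
          gx, gy = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
          nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)        # 2-D map coordinates
          w = rng.normal(size=(nodes.shape[0], data.shape[1]))      # codebook vectors
          for t in range(iters):
              x = data[rng.integers(len(data))]
              bmu = np.argmin(((w - x) ** 2).sum(axis=1))           # best-matching unit
              frac = t / iters
              lr = lr0 * (1.0 - frac)
              sigma = sigma0 * (1.0 - frac) + 0.5
              d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)
              h = np.exp(-d2 / (2.0 * sigma ** 2))                  # neighborhood kernel
              w += lr * h[:, None] * (x - w)
          return w

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          # Two synthetic "operating regimes" standing in for plasma signals.
          safe = rng.normal(0.0, 1.0, size=(300, 12))
          risky = rng.normal(3.0, 1.0, size=(300, 12))
          data = np.vstack([safe, risky])
          w = train_som(data)
          bmus = np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])
          shared = len(set(bmus[:300].tolist()) & set(bmus[300:].tolist()))
          print("map cells shared by the two regimes:", shared)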

  18. Does the delta quench Gamow-Teller strength in (p,n)- and (p⃗,p⃗′)-reactions

    International Nuclear Information System (INIS)

    Osterfeld, F.; Schulte, A.; Udagawa, T.; Yabe, M.

    1986-01-01

    Microscopic analyses of complete forward-angle intermediate-energy (p,n)-, (³He,t)- and (p⃗,p⃗′)-spin-flip spectra are presented for the reactions ⁹⁰Zr(p,n), ⁹⁰Zr(³He,t) and ⁹⁰Zr(p⃗,p⃗′). It is shown that the whole spectra up to high excitation energies (E_X ≈ 50 MeV) are the result of correlated one-particle-one-hole (1p1h) spin-isospin transitions only. The spectra reflect, therefore, the linear spin-isospin response of the target nucleus to the probing external hadronic fields. Our results suggest that the measured (p,n)-, (³He,t)- and (p⃗,p⃗′)-cross sections are compatible with the transition strength predictions as obtained from random phase approximation (RPA) calculations. This means that the Δ isobar quenching mechanism is likely to be rather small. (orig.)

  19. Tungsten disulphide based all fiber Q-switching cylindrical-vector beam generation

    Energy Technology Data Exchange (ETDEWEB)

    Lin, J.; Yan, K.; Zhou, Y. [Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei 230026 (China); Xu, L. X., E-mail: xulixin@ustc.edu.cn; Gu, C. [Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei 230026 (China); Haixi Collaborative Innovation Center for New Display Devices and Systems Integration, Fuzhou University, Fuzhou 350002 (China); Zhan, Q. W. [Electro-Optics Program, University of Dayton, Dayton, Ohio 45469 (United States)

    2015-11-09

    We proposed and demonstrated an all-fiber passively Q-switched laser that generates cylindrical-vector beams. A two-dimensional material, tungsten disulphide (WS₂), was adopted as the saturable absorber inside the laser cavity, while a few-mode fiber Bragg grating was used as a transverse mode-selective output coupler. The repetition rate of the Q-switched output pulses can be varied from 80 kHz to 120 kHz with a shortest duration of 958 ns. Attributed to the high damage threshold and polarization insensitivity of the WS₂-based saturable absorber, the radially polarized beam and azimuthally polarized beam can be easily generated in the Q-switched fiber laser.

  20. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    Science.gov (United States)

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence discrimination are the EEG, the Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
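
    A simplified Python sketch of the feature-elimination loop (not the authors' pipeline): it fits one linear support vector regressor per output through a wrapper, rather than the joint multiple-output kernel machine used in the paper, and ranks features by the summed absolute weights across both outputs. The synthetic features stand in for EEG/EOG/EMG/GSR descriptors.

      import numpy as np
      from sklearn.multioutput import MultiOutputRegressor
      from sklearn.svm import LinearSVR

      def rfe_multioutput(X, Y, n_keep=6, step=2):
          """Recursively drop the weakest features, ranked by the summed absolute
          weights of linear SVR models fitted for all outputs."""
          keep = np.arange(X.shape[1])
          while len(keep) > n_keep:
              model = MultiOutputRegressor(LinearSVR(C=1.0, max_iter=10000))
              model.fit(X[:, keep], Y)
              importance = np.sum([np.abs(est.coef_) for est in model.estimators_], axis=0)
              drop = np.argsort(importance)[:min(step, len(keep) - n_keep)]
              keep = np.delete(keep, drop)
          return keep

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X = rng.normal(size=(400, 40))
          # Output 0 ("arousal") depends on features 0-2, output 1 ("valence") on 3-5.
          Y = np.stack([X[:, 0] + 0.5 * X[:, 1] - X[:, 2],
                        X[:, 3] - 0.5 * X[:, 4] + X[:, 5]], axis=1)
          Y += 0.1 * rng.normal(size=Y.shape)
          print("selected feature indices:", sorted(rfe_multioutput(X, Y).tolist()))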

  1. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    OpenAIRE

    Tian, Xinmin; Saito, Hideki; Preis, Serguei V.; Garcia, Eric N.; Kozhukhov, Sergey S.; Masten, Matt; Cherkasov, Aleksei G.; Panchenko, Nikolay

    2015-01-01

    Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A ...

  2. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in an OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)

  3. Representation theory of 2-groups on finite dimensional 2-vector spaces

    OpenAIRE

    Elgueta, Josep

    2004-01-01

    In this paper, the 2-category $\mathfrak{Rep}_{{\bf 2Mat}_{\mathbb{C}}}(\mathbb{G})$ of (weak) representations of an arbitrary (weak) 2-group $\mathbb{G}$ on (some version of) Kapranov and Voevodsky's 2-category of (complex) 2-vector spaces is studied. In particular, the set of equivalence classes of representations is computed in terms of the invariants $\pi_0(\mathbb{G})$, $\pi_1(\mathbb{G})$ and $[\alpha]\in H^3(\pi_0(\mathbb{G}),\pi_1(\mathbb{G}))$ classifying $\mathbb{G}$. Also the categ...

  4. Emerging Vector-Borne Diseases - Incidence through Vectors.

    Science.gov (United States)

    Savić, Sara; Vidić, Branka; Grgić, Zivoslav; Potkonjak, Aleksandar; Spasojevic, Ljubica

    2014-01-01

    Vector-borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays, in continental countries, there is a struggle with emerging diseases that have found their way to appear through vectors. Vector-borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens, and a susceptible human population exist at the same time, at the same place. Global climate change is predicted to lead to an increase in vector-borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for diseases that are most likely to emerge is required. Canine vector-borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, ehrlichiosis, and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have a zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role primarily in the prevention and then in the treatment of vector-borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a 4-year period, from 2009 to 2013, a total number of 551 dog samples were analyzed for vector-borne diseases (borreliosis, babesiosis, ehrlichiosis, anaplasmosis, dirofilariosis, and leishmaniasis) in routine laboratory work. The analysis was done by serological tests - ELISA for borreliosis, dirofilariosis, and leishmaniasis, modified Knott test for dirofilariosis, and blood smear for babesiosis, ehrlichiosis, and anaplasmosis. This number of samples represented 75% of the total number of samples that were sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  5. Representation and display of vector field topology in fluid flow data sets

    Science.gov (United States)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
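
    A minimal Python sketch of the critical-point step described above (not the authors' implementation): flag grid cells where both velocity components change sign and classify each candidate from the eigenvalues of a finite-difference Jacobian. The analytic test field is an assumption for the example.

      import numpy as np

      def find_critical_points(u, v, dx, dy):
          points = []
          H, W = u.shape
          for i in range(1, H - 2):
              for j in range(1, W - 2):
                  cu, cv = u[i:i + 2, j:j + 2], v[i:i + 2, j:j + 2]
                  if cu.min() < 0 < cu.max() and cv.min() < 0 < cv.max():
                      # Jacobian of (u, v) with respect to (x, y) by central differences.
                      J = np.array([
                          [(u[i, j + 1] - u[i, j - 1]) / (2 * dx),
                           (u[i + 1, j] - u[i - 1, j]) / (2 * dy)],
                          [(v[i, j + 1] - v[i, j - 1]) / (2 * dx),
                           (v[i + 1, j] - v[i - 1, j]) / (2 * dy)],
                      ])
                      eig = np.linalg.eigvals(J)
                      if np.all(np.iscomplex(eig)):
                          kind = "center/focus"
                      elif np.all(eig.real > 0) or np.all(eig.real < 0):
                          kind = "node"
                      else:
                          kind = "saddle"
                      points.append(((i, j), kind))
          return points

      if __name__ == "__main__":
          y, x = np.mgrid[-2:2:80j, -2:2:80j]
          dx = dy = 4.0 / 79.0
          u, v = x * x - 1.0, y                      # zeros at (x, y) = (+/-1, 0)
          for (i, j), kind in find_critical_points(u, v, dx, dy):
              print(f"critical point near (x={x[i, j]:.2f}, y={y[i, j]:.2f}): {kind}")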

  6. A memory efficient method for fully three-dimensional object reconstruction with HAADF STEM

    International Nuclear Information System (INIS)

    Van den Broek, W.; Rosenauer, A.; Van Aert, S.; Sijbers, J.; Van Dyck, D.

    2014-01-01

    The conventional approach to object reconstruction through electron tomography is to reduce the three-dimensional problem to a series of independent two-dimensional slice-by-slice reconstructions. However, at atomic resolution the image of a single atom extends over many such slices and incorporating this image as prior knowledge in tomography or depth sectioning therefore requires a fully three-dimensional treatment. Unfortunately, the size of the three-dimensional projection operator scales highly unfavorably with object size and readily exceeds the available computer memory. In this paper, it is shown that for incoherent image formation the memory requirement can be reduced to the fundamental lower limit of the object size, both for tomography and depth sectioning. Furthermore, it is shown through multislice calculations that high angle annular dark field scanning transmission electron microscopy can be sufficiently incoherent for the reconstruction of single element nanocrystals, but that dynamical diffraction effects can cause classification problems if more than one element is present. - Highlights: • The full 3D approach to atomic resolution object retrieval has high memory load. • For incoherent imaging the projection process is a matrix–vector product. • Carrying out this product implicitly as Fourier transforms reduces memory load. • Reconstructions are demonstrated from HAADF STEM and depth sectioning simulations
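
    A minimal Python sketch of the highlighted idea (with an assumed object size, a Gaussian probe, and the same transfer function for every slice for simplicity): because incoherent image formation is a convolution, the projector's matrix-vector product can be applied implicitly with FFTs instead of storing the full projection matrix.

      import numpy as np

      def gaussian_otf(shape, sigma):
          """Transfer function of a Gaussian probe, defined directly in Fourier space."""
          ky = np.fft.fftfreq(shape[0])
          kx = np.fft.fftfreq(shape[1])
          KX, KY = np.meshgrid(kx, ky)
          return np.exp(-2.0 * (np.pi * sigma) ** 2 * (KX ** 2 + KY ** 2))

      def project(obj, otf):
          """Implicit A @ x: blur each depth slice of the object and sum along the beam."""
          img = np.zeros(obj.shape[1:])
          for slice_ in obj:
              img += np.fft.ifft2(np.fft.fft2(slice_) * otf).real
          return img

      if __name__ == "__main__":
          obj = np.zeros((16, 128, 128))             # depth x height x width
          obj[8, 64, 64] = 1.0                       # a single "atom"
          image = project(obj, gaussian_otf(obj.shape[1:], sigma=2.0))
          # An explicit projection matrix would need (128*128) x (16*128*128) entries;
          # the implicit version only ever stores the object and one 2-D image.
          print("image peak at", np.unravel_index(image.argmax(), image.shape))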

  7. Fine-scale mapping of vector habitats using very high resolution satellite imagery: a liver fluke case-study.

    Science.gov (United States)

    De Roeck, Els; Van Coillie, Frieke; De Wulf, Robert; Soenen, Karen; Charlier, Johannes; Vercruysse, Jozef; Hantson, Wouter; Ducheyne, Els; Hendrickx, Guy

    2014-12-01

    The visualization of vector occurrence in space and time is an important aspect of studying vector-borne diseases. Detailed maps of possible vector habitats provide valuable information for the prediction of infection risk zones but are currently lacking for most parts of the world. Nonetheless, monitoring vector habitats from the finest scales up to farm level is of key importance to refine currently existing broad-scale infection risk models. Using Fasciola hepatica, a parasitic liver fluke, as a case in point, this study illustrates the potential of very high resolution (VHR) optical satellite imagery to efficiently and semi-automatically detect detailed vector habitats. A WorldView-2 satellite image was analysed for this purpose; the parasite is transmitted by freshwater snails. The vector thrives in small water bodies (SWBs), such as ponds, ditches and other humid areas consisting of open water, aquatic vegetation and/or inundated grass. These water bodies can be as small as a few m2 and are most often not present on existing land cover maps because of their small size. We present a classification procedure based on object-based image analysis (OBIA) that proved valuable to detect SWBs at a fine scale in an operational and semi-automated way. The classification results were compared to field and other reference data such as existing broad-scale maps and expert knowledge. Overall, the SWB detection accuracy reached up to 87%. The resulting fine-scale SWB map can be used as input for spatial distribution modelling of the liver fluke snail vector to enable development of improved infection risk mapping and management advice adapted to specific, local farm situations.

  8. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (parallelization). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hideo; Kawai, Wataru; Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); and others

    1997-12-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. These results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the parallelization. In this parallelization part, the parallelization of the 2-Dimensional relativistic electromagnetic particle code EM2D, the Cylindrical Direct Numerical Simulation code CYLDNS and the molecular dynamics code for simulating radiation damages in diamond crystals, DGR, is described. In the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. Then, in the porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinate transport code TWOTRAN-II is described. Also, a survey for the porting of the command-driven interactive data analysis plotting program IPLOT is described. (author)

  9. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (porting). Progress report fiscal 1996

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki [Fujitsu Ltd., Tokyo (Japan); Kawasaki, Nobuo; Tanabe, Hidenobu [and others

    1998-01-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. These results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the porting. In this porting part, the porting of the reactor safety analysis codes RELAP5/MOD3.2 and RELAP5/MOD3.2.1.2, the nuclear data processing system NJOY and the 2-D multigroup discrete ordinate transport code TWOTRAN-II is described. Also, a survey for the porting of the command-driven interactive data analysis plotting program IPLOT is described. In the parallelization part, the parallelization of the 2-Dimensional relativistic electromagnetic particle code EM2D, the Cylindrical Direct Numerical Simulation code CYLDNS and the molecular dynamics code for simulating radiation damages in diamond crystals, DGR, is described. Then, in the vectorization part, the vectorization of the two- and three-dimensional discrete ordinates simulation code DORT-TORT, the gas dynamics analysis code FLOWGR and the relativistic Boltzmann-Uehling-Uhlenbeck simulation code RBUU is described. (author)

  10. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling.

    Science.gov (United States)

    Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2017-10-26

    In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO Radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU would spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to arbitrary array configurations and has low complexity because it avoids the angle searching procedure. When MC and GPU exist together between the elements of the EMVSs, in order to make our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm. It can solve the problem of GPU and MC efficiently, and the array configuration can be arbitrary. The effectiveness of our proposed algorithms is verified by the simulation results.

  11. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-10-01

    In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO Radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU would spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to arbitrary array configurations and has low complexity because it avoids the angle searching procedure. When MC and GPU exist together between the elements of the EMVSs, in order to make our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm. It can solve the problem of GPU and MC efficiently, and the array configuration can be arbitrary. The effectiveness of our proposed algorithms is verified by the simulation results.

  12. Investigating the Magnetic Imprints of Major Solar Eruptions with SDO /HMI High-cadence Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Xudong; Hoeksema, J. Todd; Liu Yang; Chen Ruizhu [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Kazachenko, Maria, E-mail: xudong@Sun.stanford.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

    2017-04-10

    The solar active region photospheric magnetic field evolves rapidly during major eruptive events, suggesting appreciable feedback from the corona. Previous studies of these “magnetic imprints” are mostly based on line-of-sight-only or lower-cadence vector observations; a temporally resolved depiction of the vector field evolution is hitherto lacking. Here, we introduce the high-cadence (90 s or 135 s) vector magnetogram data set from the Helioseismic and Magnetic Imager, which is well suited for investigating the phenomenon. These observations allow quantitative characterization of the permanent, step-like changes that are most pronounced in the horizontal field component (B_h). A highly structured pattern emerges from analysis of an archetypical event, SOL2011-02-15T01:56, where B_h near the main polarity inversion line increases significantly during the earlier phase of the associated flare with a timescale of several minutes, while B_h in the periphery decreases at later times with smaller magnitudes and a slightly longer timescale. The data set also allows effective identification of the “magnetic transient” artifact, where enhanced flare emission alters the Stokes profiles and the inferred magnetic field becomes unreliable. Our results provide insights on the momentum processes in solar eruptions. The data set may also be useful to the study of sunquakes and data-driven modeling of the corona.

  13. INTERIM ANALYSIS OF THE CONTRIBUTION OF HIGH-LEVEL EVIDENCE FOR DENGUE VECTOR CONTROL.

    Science.gov (United States)

    Horstick, Olaf; Ranzinger, Silvia Runge

    2015-01-01

    This interim analysis reviews the available systematic literature for dengue vector control on three levels: 1) single and combined vector control methods, with existing work on peridomestic space spraying and on Bacillus thuringiensis israelensis, and further work available soon on the use of Temephos, copepods and larvivorous fish; 2) vector control for a specific purpose, such as outbreak control; and 3) the strategic level, for example decentralization vs centralization, with a systematic review on vector control organization. Clear best-practice guidelines for the methodology of entomological studies are needed. There is also a need to include measurement of dengue transmission data. The following recommendations emerge: Although vector control can be effective, implementation remains an issue; Single interventions are probably not useful; Combinations of interventions have mixed results; Careful implementation of vector control measures may be most important; Outbreak interventions are often applied with questionable effectiveness.

  14. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  15. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    DEFF Research Database (Denmark)

    2013-01-01

    as to traverse a field of view, and receive circuitry (306) configured to receive a two dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells etc. A beamformer...

  16. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  17. A covariant form of the Maxwell's equations in four-dimensional spaces with an arbitrary signature

    International Nuclear Information System (INIS)

    Lukac, I.

    1991-01-01

    The concept of duality in the four-dimensional spaces with the arbitrary constant metric is strictly mathematically formulated. A covariant model for covariant and contravariant bivectors in this space based on three four-dimensional vectors is proposed. 14 refs

  18. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-15

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.
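
    A minimal Python sketch in the spirit of the approach (not the SVM-HINT package itself): scan SVM hyper-parameters and rank them by an approximate discovery significance s/sqrt(s+b) on a held-out sample. The synthetic signal and background samples, the grid of C and gamma values, and the figure of merit are illustrative assumptions.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      def approx_significance(y_true, y_pred):
          """Rough figure of merit: selected signal s over sqrt(s + b)."""
          s = np.sum((y_pred == 1) & (y_true == 1))
          b = np.sum((y_pred == 1) & (y_true == 0))
          return s / np.sqrt(s + b) if s + b > 0 else 0.0

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          background = rng.normal(0.0, 1.0, size=(2000, 5))
          signal = rng.normal(0.8, 1.0, size=(500, 5))
          X = np.vstack([background, signal])
          y = np.concatenate([np.zeros(len(background)), np.ones(len(signal))])
          Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=1)
          best = None
          for C in (0.1, 1.0, 10.0):
              for gamma in (0.01, 0.1, 1.0):
                  clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(Xtr, ytr)
                  z = approx_significance(yte, clf.predict(Xte))
                  if best is None or z > best[0]:
                      best = (z, C, gamma)
          print("best approximate significance %.2f at C=%s, gamma=%s" % best)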

  19. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.

  20. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  1. 3D Vector Velocity Estimation using a 2D Phased Array

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Jensen, Jørgen Arendt

    2011-01-01

    of using the TO method for estimating 3D velocity vectors, and the proposed decoupling is demonstrated. A 64x64 and a 32x32 element transducer are emulated using Field II. Plug flow with a speed of 1 m/s in a small region is rotated in the XY-plane. A binary flow example with [vx,vy]=[1,0] and [0,1] m......A method to estimate the three dimensional (3D) velocity vector is presented in this paper. 3D velocity vector techniques are needed to measure the full velocity and characterize the complicated flow patterns in the human body. The Transverse Oscillation (TO) method introduces oscillations...... matrix transducer. For the 32x32 transducer, the mean and standard deviation for the speed are 0.94 ± 0.11 m/s and for the angle bias −0.48° ± 7.7°. The simulation study clearly demonstrates that the new method can be used to estimate the 3D velocity vector using a 2D phased matrix array, and that the velocity...

  2. Vector-Tensor and Vector-Vector Decay Amplitude Analysis of B⁰→φK*⁰

    International Nuclear Information System (INIS)

    Aubert, B.; Bona, M.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P.; Poireau, V.; Tisserand, V.; Zghiche, A.; Grauges, E.; Palano, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.

    2007-01-01

    We perform an amplitude analysis of the decays B⁰ → φK₂*(1430)⁰, φK*(892)⁰, and φ(Kπ)⁰ S-wave with a sample of about 384×10⁶ BB̄ pairs recorded with the BABAR detector. The fractions of longitudinal polarization f_L of the vector-tensor and vector-vector decay modes are measured to be 0.853 +0.061/−0.069 ± 0.036 and 0.506 ± 0.040 ± 0.015, respectively. Overall, twelve parameters are measured for the vector-vector decay and seven parameters for the vector-tensor decay, including the branching fractions and parameters sensitive to CP violation

  3. High-titer recombinant adeno-associated virus production utilizing a recombinant herpes simplex virus type I vector expressing AAV-2 Rep and Cap.

    Science.gov (United States)

    Conway, J E; Rhys, C M; Zolotukhin, I; Zolotukhin, S; Muzyczka, N; Hayward, G S; Byrne, B J

    1999-06-01

    Recombinant adeno-associated virus type 2 (rAAV) vectors have recently been used to achieve long-term, high level transduction in vivo. Further development of rAAV vectors for clinical use requires significant technological improvements in large-scale vector production. In order to facilitate the production of rAAV vectors, a recombinant herpes simplex virus type I vector (rHSV-1), which does not produce ICP27, has been engineered to express the AAV-2 rep and cap genes. The optimal dose of this vector, d27.1-rc, for AAV production has been determined and results in a yield of 380 expression units (EU) of AAV-GFP produced from 293 cells following transfection with AAV-GFP plasmid DNA. In addition, d27.1-rc was also efficient at producing rAAV from cell lines that have an integrated AAV-GFP provirus. Up to 480 EU/cell of AAV-GFP could be produced from the cell line GFP-92, a proviral, 293-derived cell line. Effective amplification of rAAV vectors introduced into 293 cells by infection was also demonstrated. Passage of rAAV with d27.1-rc results in up to 200-fold amplification of AAV-GFP with each passage after coinfection of the vectors. Efficient, large-scale production (>10⁹ cells) of AAV-GFP from a proviral cell line was also achieved and these stocks were free of replication-competent AAV. The described rHSV-1 vector provides a novel, simple and flexible way to introduce the AAV-2 rep and cap genes and helper virus functions required to produce high-titer rAAV preparations from any rAAV proviral construct. The efficiency and potential for scalable delivery of d27.1-rc to producer cell cultures should facilitate the production of sufficient quantities of rAAV vectors for clinical application.

  4. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in ""Defining a vector"" to his final comments on ""What then is a vector?"" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p

  5. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
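
    A minimal Python illustration of the adaptive LASSO idea in a plain regression setting (not the paper's dynamic time-series setup): first-stage ridge estimates give the penalty weights, and the weighted L1 fit is obtained by rescaling the regressors before a standard Lasso. The data-generating process is synthetic.

      import numpy as np
      from sklearn.linear_model import Lasso, Ridge

      def adaptive_lasso(X, y, alpha=0.05, gamma=1.0):
          beta_init = Ridge(alpha=1.0).fit(X, y).coef_
          w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive penalty weights
          Xw = X / w                                      # column rescaling trick
          fit = Lasso(alpha=alpha, max_iter=50000).fit(Xw, y)
          return fit.coef_ / w                            # undo the rescaling

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n, p = 200, 50
          X = rng.normal(size=(n, p))
          beta = np.zeros(p)
          beta[:3] = [2.0, -1.5, 1.0]                     # only three relevant regressors
          y = X @ beta + rng.normal(scale=0.5, size=n)
          est = adaptive_lasso(X, y)
          print("selected variables:", np.flatnonzero(np.abs(est) > 1e-6).tolist())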

  6. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  7. Two-dimensional calculus

    CERN Document Server

    Osserman, Robert

    2011-01-01

    The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end o

  8. Vector network analyzer (VNA) measurements and uncertainty assessment

    CERN Document Server

    Shoaib, Nosherwan

    2017-01-01

    This book describes vector network analyzer measurements and uncertainty assessments, particularly in waveguide test-set environments, in order to establish their compatibility to the International System of Units (SI) for accurate and reliable characterization of communication networks. It proposes a fully analytical approach to measurement uncertainty evaluation, while also highlighting the interaction and the linear propagation of different uncertainty sources to compute the final uncertainties associated with the measurements. The book subsequently discusses the dimensional characterization of waveguide standards and the quality of the vector network analyzer (VNA) calibration techniques. The book concludes with an in-depth description of the novel verification artefacts used to assess the performance of the VNAs. It offers a comprehensive reference guide for beginners to experts, in both academia and industry, whose work involves the field of network analysis, instrumentation and measurements.

  9. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that differences in sparse and noisy dimensions account for a large proportion of the similarity, making any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding interval. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
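
    One possible reading of the interval scheme, sketched in Python under stated assumptions (the number of intervals, the [0, 1] scoring rule and the synthetic data are illustrative; this is not the authors' code).

      import numpy as np

      def lattice_similarity(a, b, lo, hi, n_bins=10):
          """Map each dimension onto its interval index after per-dimension
          normalization; only dimensions falling in the same or an adjacent
          interval contribute to the similarity."""
          bins_a = np.floor((a - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
          bins_b = np.floor((b - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
          close = np.abs(bins_a - bins_b) <= 1            # same or adjacent interval
          if not np.any(close):
              return 0.0
          per_dim = 1.0 - np.abs(a[close] - b[close]) / (hi[close] - lo[close] + 1e-12)
          return per_dim.mean() * close.mean()            # stays within [0, 1]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          X = rng.uniform(size=(100, 200))                # 200-dimensional data
          lo, hi = X.min(axis=0), X.max(axis=0)
          near = X[0] + 0.01 * rng.normal(size=200)
          far = rng.uniform(size=200)
          print("similar pair: %.3f" % lattice_similarity(X[0], near, lo, hi))
          print("random pair:  %.3f" % lattice_similarity(X[0], far, lo, hi))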

  10. A new approach to radiative transfer theory using Jones's vectors. I

    International Nuclear Information System (INIS)

    Fymat, A.L.; Vasudevan, R.

    1975-01-01

    Radiative transfer of partially polarized radiation in an anisotropically scattering, inhomogeneous atmosphere containing arbitrary polydispersion of particles is described using Jones's amplitude vectors and matrices. This novel approach exploits the close analogy between the quantum mechanical states of spin 1/2 systems and the polarization states of electromagnetic radiation described by Jones's vector, and draws on the methodology of such spin 1/2 systems. The complete equivalence between the transport equation for Jones's vectors and the classical radiative transfer equation for Stokes's intensity vectors is demonstrated in two independent ways after deriving the transport equations for the polarization coherency matrices and for the quaternions corresponding to the Jones's vectors. A compact operator formulation of the theory is provided, and used to derive the necessary equations for both a local and a global description of the transport of Jones's vectors. Lastly, the integro-differential equations for the amplitude reflection and transmission matrices are derived, and related to the usual corresponding equations. The present formulation is the most succinct and the most convenient one for both theoretical and experimental studies. It yields a simpler analysis than the classical formulation since it reduces by a factor of two the dimensionality of transfer problems. It preserves information on phases, and thus can be used directly across the entire electromagnetic spectrum without any further conversion into intensities. (Auth.)

  11. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing faculty (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling the computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily-used nuclear codes by analyzing their dynamic behaviors run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) Effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics in JAERI jobs. Problems of vector processors are also discussed from the view points of code performance and ease of use. (author)

  12. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-. rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in details, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  14. Coupling genetics and proteomics to identify aphid proteins associated with vector-specific transmission of polerovirus (luteoviridae).

    Science.gov (United States)

    Yang, Xiaolong; Thannhauser, T W; Burrows, Mary; Cox-Foster, Diana; Gildow, Fred E; Gray, Stewart M

    2008-01-01

    Cereal yellow dwarf virus-RPV (CYDV-RPV) is transmitted specifically by the aphids Rhopalosiphum padi and Schizaphis graminum in a circulative nonpropagative manner. The high level of vector specificity results from the vector aphids having the functional components of the receptor-mediated endocytotic pathways to allow virus to transverse the gut and salivary tissues. Studies of F(2) progeny from crosses of vector and nonvector genotypes of S. graminum showed that virus transmission efficiency is a heritable trait regulated by multiple genes acting in an additive fashion and that gut- and salivary gland-associated factors are not genetically linked. Utilizing two-dimensional difference gel electrophoresis to compare the proteomes of vector and nonvector parental and F(2) genotypes, four aphid proteins (S4, S8, S29, and S405) were specifically associated with the ability of S. graminum to transmit CYDV-RPV. The four proteins were coimmunoprecipitated with purified RPV, indicating that the aphid proteins are capable of binding to virus. Analysis by mass spectrometry identified S4 as a luciferase and S29 as a cyclophilin, both of which have been implicated in macromolecular transport. Proteins S8 and S405 were not identified from available databases. Study of this unique genetic system coupled with proteomic analysis indicated that these four virus-binding aphid proteins were specifically inherited and conserved in different generations of vector genotypes and suggests that they play a major role in regulating polerovirus transmission.

  15. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen’s Reading Images (2006), plays a crucial role in distinguishing between ‘narrative’, action-oriented processes and ‘conceptual’, state-oriented processes. The use of this concept in image analysis has usually focused... ...should be taken into account in discussing ‘reactions’, which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim’s account of vectors, these issues are outlined...

  16. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
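
    For reference, the HDMR expansion referred to above is commonly written (in its standard form, not anything specific to this record) as

        f(\mathbf{x}) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots + f_{12\cdots n}(x_1, \ldots, x_n),

    and, when higher-order input correlations are weak, the series is truncated after the first- or second-order component functions, so the number of terms to estimate grows only polynomially (of order n or n²) with the number of inputs rather than exponentially.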

  17. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  18. Improved Coinfection with Amphotropic Pseudotyped Retroviral Vectors

    Directory of Open Access Journals (Sweden)

    Yuehong Wu

    2009-01-01

    Full Text Available Amphotropic pseudotyped retroviral vectors have typically been used to infect target cells without prior concentration. Although this can yield high rates of infection, higher rates may be needed where highly efficient coinfection of two or more vectors is needed. In this investigation we used amphotropic retroviral vectors produced by the Plat-A cell line and studied coinfection rates using green and red fluorescent proteins (EGFP and dsRed2). Target cells were primary human fibroblasts (PHF) and 3T3 cells. Unconcentrated vector preparations produced a coinfection rate of ∼4% (defined as cells that are both red and green as a percentage of all cells infected). Optimized spinoculation, comprising centrifugation at 1200 g for 2 hours at 15 °C, increased the coinfection rate to ∼10%. Concentration by centrifugation at 10,000 g or by flocculation using Polybrene increased the coinfection rate to ∼25%. Combining the two processes, concentration by Polybrene flocculation and optimized spinoculation, increased the coinfection rate to 35% (3T3) or >50% (PHF). Improved coinfection should be valuable in protocols that require high transduction by combinations of two or more retroviral vectors.

  19. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  20. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
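
    A minimal numerical sketch of the factor-plus-sparse-residual idea summarized above is given below; it uses principal components for the common factors and a single hard threshold in place of the adaptive thresholding of Cai and Liu (2011), so the function name and default values are illustrative only.

```python
import numpy as np

def factor_thresholded_cov(X, n_factors=3, thresh=0.05):
    # Sketch: remove n_factors principal-component "common factors" from the
    # sample covariance, hard-threshold the remaining (idiosyncratic) part, and
    # add the two pieces back together. The record uses the adaptive threshold
    # of Cai and Liu (2011); a single global threshold is used here for brevity.
    X = X - X.mean(axis=0)                      # T x p data panel, centered
    S = np.cov(X, rowvar=False)                 # p x p sample covariance
    vals, vecs = np.linalg.eigh(S)
    lead = np.argsort(vals)[::-1][:n_factors]   # leading eigenpairs = common part
    low_rank = (vecs[:, lead] * vals[lead]) @ vecs[:, lead].T
    resid = S - low_rank                        # idiosyncratic covariance
    sparse = np.where(np.abs(resid) >= thresh, resid, 0.0)
    np.fill_diagonal(sparse, np.diag(resid))    # never threshold the diagonal
    return low_rank + sparse

# Toy usage: 200 observations of 50 series driven by 3 common factors.
rng = np.random.default_rng(0)
F, B = rng.normal(size=(200, 3)), rng.normal(size=(50, 3))
X = F @ B.T + 0.5 * rng.normal(size=(200, 50))
Sigma_hat = factor_thresholded_cov(X, n_factors=3)
```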

  1. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

    In a system of N sensors, the sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ ℝ according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) such that Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : ℝ^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitutes a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f̂ to f* is feasible. We estimate the sample size sufficient to ensure that f̂ provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f̂ can be easily computed by well-known least-squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
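
    The least-squares computation mentioned in point (b) can be sketched for the simplest choice of the vector space F, namely affine functions of the N sensor outputs; the function names and the toy sensor model below are assumptions for illustration.

```python
import numpy as np

def fit_fusion_rule(Y, X):
    # Least-squares fusion rule over the space spanned by the affine basis
    # [1, y1, ..., yN] (a minimal choice of F; the record allows any finite-
    # dimensional function space). Y is an (n, N) matrix of sensor outputs and
    # X the length-n vector of inputs to be recovered.
    A = np.hstack([np.ones((Y.shape[0], 1)), Y])
    coef, *_ = np.linalg.lstsq(A, X, rcond=None)
    def fuse(y):
        # Clip to [0, 1] since the fusion rule maps into the unit interval.
        return float(np.clip(np.concatenate(([1.0], y)) @ coef, 0.0, 1.0))
    return fuse

# Toy example: three noisy sensors observing the same scalar input in [0, 1].
rng = np.random.default_rng(1)
X = rng.uniform(size=500)
Y = X[:, None] + rng.normal(scale=[0.05, 0.1, 0.2], size=(500, 3))
fuse = fit_fusion_rule(Y, X)
print(fuse(np.array([0.31, 0.28, 0.40])))
```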

  2. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
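
    A minimal sketch of the complex-number encoding described above: each 2-D vector is stored as a complex value, so a single complex regression coefficient captures both the scaling and the rotation applied to an explanatory vector variable. The function name and toy data are illustrative; the isomorphism-based real reformulation and the operational test statistics of the paper are not reproduced.

```python
import numpy as np

def complex_vector_regression(d, Z):
    # 2-D vectors are encoded as complex numbers; the coefficients of the
    # explanatory vector variables are themselves complex and are estimated by
    # least squares. d: (n,) complex dependent observations; Z: (n, k) complex
    # explanatory variables.
    A = np.hstack([np.ones((Z.shape[0], 1), dtype=complex), Z])
    beta, *_ = np.linalg.lstsq(A, d, rcond=None)
    return beta          # beta[0]: intercept vector, beta[1:]: rotation + scaling

# Toy example: a wind vector driven by a pressure-gradient vector.
rng = np.random.default_rng(2)
grad = rng.normal(size=200) + 1j * rng.normal(size=200)
true_coef = 0.8 * np.exp(1j * np.deg2rad(30))        # scaling plus a 30-degree rotation
wind = true_coef * grad + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
beta = complex_vector_regression(wind, grad[:, None])
print(np.abs(beta[1]), np.rad2deg(np.angle(beta[1])))   # recovers ~0.8 and ~30 degrees
```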

  3. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  5. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb2+ and Cd2+ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three dimensional macroporous carbon spheres electrode surfaces is described. The highly ordered three dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The highly ordered three dimensional macroporous carbon spheres were covalently modified by cysteine, an amino acid with high affinities towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy techniques. Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions due to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  6. New possibilities for testing local realism in high energy physics

    International Nuclear Information System (INIS)

    Li Junli; Qiao Congfeng

    2009-01-01

    The three photons from the dominant ortho-positronium decay and the two vector mesons from the ηc exclusive decays are found to be in tripartite and high-dimensional entangled states, respectively. These two classes of entangled states possess the Hardy-type nonlocality and allow, a priori, for a test of quantum mechanics versus local realism via Bell inequalities. The experimental realizations are shown to be feasible, and a concrete scheme to fulfill the test in experiment via the two-vector-meson entangled state is proposed.

  7. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, using the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the GOP. The GOP-level MVP scheme is then chosen as either S-MVP or AMDST-MVP, where AMDST-MVP combines S-MVP with temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the MV of the current block obtained from S-MVP is used to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.
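
    For context, the 5/3 (LeGall) filters mentioned above are usually applied in lifting form; the sketch below shows one level of the 1-D predict/update steps (without motion compensation, in floating point rather than the integer-to-integer variant, and with a simple symmetric boundary extension). It is only the temporal-filter skeleton, not the proposed AMDST-MVP scheme itself.

```python
import numpy as np

def lift_53(x):
    # One level of the 5/3 lifting transform in 1-D; assumes an even-length signal.
    x = np.asarray(x, float)
    assert len(x) % 2 == 0, "sketch assumes an even-length signal"
    even, odd = x[0::2].copy(), x[1::2].copy()
    even_ext = np.append(even, even[-1])             # symmetric extension on the right
    # Predict: detail = odd sample minus the average of its two even neighbours.
    d = odd - 0.5 * (even_ext[:-1] + even_ext[1:])
    d_ext = np.insert(d, 0, d[0])                    # symmetric extension on the left
    # Update: approximation = even sample plus a quarter of the sum of neighbouring details.
    s = even + 0.25 * (d_ext[:-1] + d_ext[1:])
    return s, d

s, d = lift_53(np.sin(np.linspace(0, np.pi, 16)))    # low-pass s, high-pass d
```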

  8. Exact solution of the N-dimensional generalized Dirac-Coulomb equation

    International Nuclear Information System (INIS)

    Tutik, R.S.

    1992-01-01

    An exact solution to the bound state problem for the N-dimensional generalized Dirac-Coulomb equation, whose potential contains both the Lorentz-vector and Lorentz-scalar terms of the Coulomb form, is obtained. 24 refs. (author)

  9. Vectorization of nuclear codes 90-1

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Nemoto, Toshiyuki; Ishiguro, Misako; Harada, Hiroo; Hori, Takeo.

    1990-09-01

    The vectorization has been carried out for four codes: SONATINA-2V HTTR version, TRIDOSE, VIENUS, and SCRYU. SONATINA-2V HTTR version is a code for analyzing the dynamic behavior of fuel blocks in the vertical slice of the HTGR (High Temperature Gas-cooled Reactor) core under seismic perturbation, TRIDOSE is a code for calculating environmental tritium concentration and dose, VIENUS is a code for analyzing visco-elastic stress of the fuel block of HTTR (High Temperature gas-cooled Test Reactor), and SCRYU is a thermal-hydraulics code with a boundary-fitted coordinate system. The total speedup ratio of the vectorized versions to the original scalar ones is 5.2 for the SONATINA-2V HTTR version, 5.9 ∼ 6.9 for TRIDOSE, 6.7 for VIENUS, and 7.6 for SCRYU, respectively. In this report, we describe the outline of the codes, the techniques used for the vectorization, the verification of the computed results, and the speedup effect of the vectorized codes. (author)

  10. Three-dimensional stellarator equilibrium as an ohmic steady state

    International Nuclear Information System (INIS)

    Park, W.; Monticello, D.A.; Strauss, H.; Manickam, J.

    1985-07-01

    A stable three-dimensional stellarator equilibrium can be obtained numerically by a time-dependent relaxation method using small values of dissipation. The final state is an ohmic steady state which approaches an ohmic equilibrium in the limit of small dissipation coefficients. We describe a method to speed up the relaxation process and a method to implement the B·∇p = 0 condition. These methods are applied to obtain three-dimensional heliac equilibria using the reduced heliac equations.

  11. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Science.gov (United States)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.

  12. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  13. An episomal vector-based CRISPR/Cas9 system for highly efficient gene knockout in human pluripotent stem cells.

    Science.gov (United States)

    Xie, Yifang; Wang, Daqi; Lan, Feng; Wei, Gang; Ni, Ting; Chai, Renjie; Liu, Dong; Hu, Shijun; Li, Mingqing; Li, Dajin; Wang, Hongyan; Wang, Yongming

    2017-05-24

    Human pluripotent stem cells (hPSCs) represent a unique opportunity for understanding the molecular mechanisms underlying complex traits and diseases. CRISPR/Cas9 is a powerful tool to introduce genetic mutations into the hPSCs for loss-of-function studies. Here, we developed an episomal vector-based CRISPR/Cas9 system, which we called epiCRISPR, for highly efficient gene knockout in hPSCs. The epiCRISPR system enables generation of up to 100% Insertion/Deletion (indel) rates. In addition, the epiCRISPR system enables efficient double-gene knockout and genomic deletion. To minimize off-target cleavage, we combined the episomal vector technology with a double-nicking strategy and a recently developed high-fidelity Cas9. Thus the epiCRISPR system offers a highly efficient platform for genetic analysis in hPSCs.

  14. Two-dimensional electroacoustic waves in silicene

    Science.gov (United States)

    Zhukov, Alexander V.; Bouffanais, Roland; Konobeeva, Natalia N.; Belonenko, Mikhail B.

    2018-01-01

    In this letter, we investigate the propagation of two-dimensional electromagnetic waves in a piezoelectric medium built upon silicene. Ultrashort optical pulses of Gaussian form are considered to probe this medium. On the basis of Maxwell's equations supplemented with the wave equation for the medium's displacement vector, we obtain the effective governing equation for the vector potential associated with the electromagnetic field, as well as the component of the displacement vector. The dependence of the pulse shape on the bandgap in silicene and the piezoelectric coefficient of the medium was analyzed, thereby revealing a nontrivial triadic interplay between the characteristics of the pulse dynamics, the electronic properties of silicene, and the electrically induced mechanical vibrations of the medium. In particular, we uncovered the possibility for an amplification of the pulse amplitude through the tuning of the piezoelectric coefficient. This property could potentially offer promising prospects for the development of amplification devices for the optoelectronics industry.

  15. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  16. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that ''supercomputing on a budget'' is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task
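
    For readers unfamiliar with the underlying update rule that such parallel implementations accelerate, a plain serial global-best PSO is sketched below; the heterogeneous CPU/GPU task decomposition studied in the record is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=64, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Plain (serial) global-best PSO; the costly parts in practice are the cost
    # evaluations and the vectorized velocity/position updates, which is what the
    # heterogeneous decomposition described above distributes across hardware.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v
        val = np.apply_along_axis(f, 1, x)                      # cost evaluations
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, best_val = pso_minimize(lambda z: np.sum(z**2), dim=50)   # 50-D sphere test
```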

  17. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate for extracting useful information, so researchers nowadays ... Recently, cluster analysis has become a popular data analysis method in a number of areas.

  18. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
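
    As background, the spiked covariance model is commonly written (in one standard form; the record works with a generalized regime) as

        \Sigma = \sum_{i=1}^{m} \lambda_i \, v_i v_i^{\top} + \sigma^2 I_p, \qquad \lambda_1 \ge \cdots \ge \lambda_m \gg \sigma^2,

    where the spiked eigenvalues \lambda_i and eigenvectors v_i are the quantities whose empirical counterparts, computed from the sample covariance of n observations in dimension p, are characterized as p, n, and the \lambda_i grow.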

  19. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.
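
    One of the machine-learning alternatives mentioned above, lasso-type confounder selection followed by propensity-score estimation, can be sketched as below; the two-stage layout, the function name, and the regularization strength are illustrative assumptions rather than the exact procedure of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lasso_propensity_scores(X_proxies, treatment, C=0.1):
    # Sketch: an L1-penalized logistic model picks a sparse set of empirical
    # proxy covariates, and a second logistic model on the selected columns
    # yields propensity scores for inverse-probability-of-treatment weighting.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    lasso.fit(X_proxies, treatment)
    selected = np.flatnonzero(lasso.coef_.ravel())
    if selected.size == 0:                       # fall back to all proxies
        selected = np.arange(X_proxies.shape[1])
    ps_model = LogisticRegression(max_iter=1000).fit(X_proxies[:, selected], treatment)
    ps = ps_model.predict_proba(X_proxies[:, selected])[:, 1]
    weights = np.where(treatment == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    return selected, ps, weights
```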

  20. Vectors of subsurface stormflow in a layered hillslope during runoff initiation

    Directory of Open Access Journals (Sweden)

    M. Retter

    2006-01-01

    Full Text Available The focus is the experimental assessment of in-situ flow vectors in a hillslope soil. We selected a 100 m² trenched hillslope study site. During prescribed sprinkling, an obliquely installed TDR wave-guide provides the velocity of the wetting front in its direction. A triplet of wave-guides mounted along the sides of a hypothetical tetrahedron, with its peak pointing down, produces a three-dimensional vector of the wetting front. The method is based on the passing of wetting fronts. We analysed 34 vectors along the hillslope at distributed locations and at soil depths from 11 cm (representing the top soil) to 40 cm (close to the bedrock interface). The mean values were vx = 16.1 mm min-1, vy = -0.2 mm min-1, and vz = 11.9 mm min-1. The velocity vectors of the wetting fronts were generally gravity dominated and downslope orientated. The downslope direction (x-axis) dominated close to bedrock, whereas no preference between the vertical and downslope directions was found in vectors close to the surface. The velocities along the contours (y-axis) varied widely. Kruskal-Wallis tests indicated that the different upslope sprinkling areas had no influence on the orientation of the vectors. Vectors of volume flux density were also calculated for each triplet. The lateral velocities from the vector approach are compared with subsurface stormflow collected at the downhill end of the slope. Velocities were 25-140 times slower than lateral saturated tracer movements on top of the bedrock. Among other points, we conclude that this method is restricted to non-complex substrate (skeleton or portion of big stones).

  1. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  2. Vectorization, parallelization and porting of nuclear codes on the VPP500 system (porting). Progress report fiscal 1997

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawai, Wataru; Watanabe, Hideo; Tanabe, Hidenobu; Kawasaki, Nobuo; Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo

    1999-05-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system and/or the AP3000 system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 14 codes in fiscal 1997. These results are reported in 3 parts, i.e., the vectorization part, the parallelization part and the porting part. In this report, we describe the porting. In this porting part, the porting of transient reactor analysis code TRAC-BF1 and Monte Carlo radiation transport code MCNP4A on the AP3000 are described. In addition, a modification of program libraries for command-driven interactive data analysis plotting program IPLOT is described. In the vectorization part, the vectorization of multidimensional two-fluid model code ACE-3D for evaluation of constitutive equations, statistical decay code SD and three-dimensional thermal analysis code for in-core test section (T2) of HENDEL SSPHEAT are described. In the parallelization part, the parallelization of cylindrical direct numerical simulation code CYLDNS44N, worldwide version of system for prediction of environmental emergency dose information code WSPEEDI, extension of quantum molecular dynamics code EQMD and three-dimensional non-steady compressible fluid dynamics code STREAM are described. (author)

  3. Three-dimensional rail-current distribution near the armature of simple, square-bore, two-rail railguns

    International Nuclear Information System (INIS)

    Beno, J.H.

    1991-01-01

    In this paper, the vector potential is solved as a three-dimensional boundary-value problem for a conductor geometry consisting of square-bore railgun rails and a stationary armature. The conductors are infinitely conducting, and perfect contact is assumed between the rails and the armature. From the vector potential solution, the surface current distribution is inferred

  4. Coupling Genetics and Proteomics To Identify Aphid Proteins Associated with Vector-Specific Transmission of Polerovirus (Luteoviridae)

    Science.gov (United States)

    Yang, Xiaolong; Thannhauser, T. W.; Burrows, Mary; Cox-Foster, Diana; Gildow, Fred E.; Gray, Stewart M.

    2008-01-01

    Cereal yellow dwarf virus-RPV (CYDV-RPV) is transmitted specifically by the aphids Rhopalosiphum padi and Schizaphis graminum in a circulative nonpropagative manner. The high level of vector specificity results from the vector aphids having the functional components of the receptor-mediated endocytotic pathways to allow virus to transverse the gut and salivary tissues. Studies of F2 progeny from crosses of vector and nonvector genotypes of S. graminum showed that virus transmission efficiency is a heritable trait regulated by multiple genes acting in an additive fashion and that gut- and salivary gland-associated factors are not genetically linked. Utilizing two-dimensional difference gel electrophoresis to compare the proteomes of vector and nonvector parental and F2 genotypes, four aphid proteins (S4, S8, S29, and S405) were specifically associated with the ability of S. graminum to transmit CYDV-RPV. The four proteins were coimmunoprecipitated with purified RPV, indicating that the aphid proteins are capable of binding to virus. Analysis by mass spectrometry identified S4 as a luciferase and S29 as a cyclophilin, both of which have been implicated in macromolecular transport. Proteins S8 and S405 were not identified from available databases. Study of this unique genetic system coupled with proteomic analysis indicated that these four virus-binding aphid proteins were specifically inherited and conserved in different generations of vector genotypes and suggests that they play a major role in regulating polerovirus transmission. PMID:17959668

  5. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb2+ and Cd2+ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three dimensional macroporous carbon spheres electrode surfaces is described. The highly ordered three dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The highly ordered three dimensional macroporous carbon spheres were covalently modified by cysteine, an amino acid with high affinities towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy techniques. Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions due to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  6. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  7. Topological superconductor in quasi-one-dimensional Tl2-xMo6Se6

    Science.gov (United States)

    Huang, Shin-Ming; Hsu, Chuang-Han; Xu, Su-Yang; Lee, Chi-Cheng; Shiau, Shiue-Yuan; Lin, Hsin; Bansil, Arun

    2018-01-01

    We propose that the quasi-one-dimensional molybdenum selenide compound Tl2-xMo6Se6 is a time-reversal-invariant topological superconductor induced by intersublattice pairing, even in the absence of spin-orbit coupling (SOC). No noticeable change in superconductivity is observed in Tl-deficient (0 ≤ x ≤ 0.1) compounds. At weak SOC, the superconductor prefers the triplet d vector lying perpendicular to the chain direction and two-dimensional E2u symmetry, which is driven to a nematic order by spontaneous rotation symmetry breaking. The locking energy of the d vector is estimated to be weak and hence the proof of its direction would rely on tunneling or phase-sensitive measurements.

  8. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  9. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  10. Canonical Groups for Quantization on the Two-Dimensional Sphere and One-Dimensional Complex Projective Space

    International Nuclear Information System (INIS)

    Sumadi A H A; H, Zainuddin

    2014-01-01

    Using Isham's group-theoretic quantization scheme, we construct the canonical groups of the systems on the two-dimensional sphere and the one-dimensional complex projective space, which are homeomorphic. In the first case, we take SO(3) as the natural canonical Lie group of rotations of the two-sphere and find all the possible Hamiltonian vector fields, followed by verifying the commutator and Poisson bracket algebra correspondences with the Lie algebra of the group. In the second case, the same technique is used to define the Lie group of CP¹, in this case SU(2). We show that one can simply use a coordinate transformation from S² to CP¹ to obtain all the Hamiltonian vector fields of CP¹. We explicitly show that the Lie algebra structures of both canonical groups are locally homomorphic. On the other hand, globally their corresponding canonical groups act on different geometries, the latter of which is almost complex. Thus the canonical group for CP¹ is the double-covering group of SO(3), namely SU(2). The relevance of the proposed formalism is to understand the idea of CP¹ as the space where the qubit lives, which is known as the Bloch sphere

  11. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    Science.gov (United States)

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction.

  12. Three-dimensional analysis of eddy current with the finite element method

    International Nuclear Information System (INIS)

    Takano, Ichiro; Suzuki, Yasuo

    1977-05-01

    The finite element method is applied to the three-dimensional analysis of eddy currents induced in a large Tokamak device (JT-60). Two techniques to study the eddy current are presented: those of the ordinary vector potential and the modified vector potential. The latter was originally developed to decrease the dimension of the global matrix. Theoretical treatment of these two is given. The skin effect for alternating current flowing in a circular loop of rectangular cross section is examined as an example of the modified vector potential technique, and the result is compared with the analytical one. This technique is useful in the analysis of eddy current problems. (auth.)

  13. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms will be presented and various numerical examples are utilized to demonstrate the efficacy of the method.

  14. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    Directory of Open Access Journals (Sweden)

    Xinmin Tian

    2015-01-01

    Full Text Available Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A set of workloads from several application domains is employed to conduct the performance study of our SIMD vectorization techniques. The performance results show that we achieved up to 12.5x performance gain on the Intel Xeon Phi coprocessor. We also demonstrate a 2000x performance speedup from the seamless integration of SIMD vectorization and parallelization.

  15. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Mehmet Oezguer; Kruecker, Dirk; Melzer-Pellmann, Isabell [DESY, Hamburg (Germany)

    2016-07-01

    In this talk, the use of Support Vector Machines (SVM) is promoted for new-physics searches in high-energy physics. We developed an interface, called SVM HEP Interface (SVM-HINT), for a popular SVM library, LibSVM, and introduced a statistical-significance based hyper-parameter optimization algorithm for new-physics searches. As an example case study, a search for Supersymmetry at the Large Hadron Collider is given to demonstrate the capabilities of SVM using SVM-HINT.
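
    SVM-HINT itself wraps LibSVM in C++; the sketch below only imitates the idea of a significance-based hyper-parameter scan, using scikit-learn and a simple s/√(s+b) figure of merit on a validation sample. The names, the grid, and the figure of merit are stand-ins, not the actual SVM-HINT interface.

```python
import numpy as np
from sklearn.svm import SVC

def significance(y_true, y_pred):
    # Approximate figure of merit s / sqrt(s + b), where s and b are the numbers
    # of selected signal and background events in the validation sample.
    sel = y_pred == 1
    s, b = np.sum(sel & (y_true == 1)), np.sum(sel & (y_true == 0))
    return s / np.sqrt(s + b) if s + b > 0 else 0.0

def scan_hyperparameters(X_tr, y_tr, X_val, y_val,
                         Cs=(0.1, 1, 10), gammas=(0.01, 0.1, 1)):
    # Grid scan of an RBF SVC, keeping the (C, gamma) pair with the best
    # validation significance rather than the best classification accuracy.
    best = (None, -np.inf)
    for C in Cs:
        for gamma in gammas:
            clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_tr, y_tr)
            z = significance(y_val, clf.predict(X_val))
            if z > best[1]:
                best = ((C, gamma), z)
    return best
```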

  16. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables.An approximating Markov chain is built using this sampling and

  17. Numerical simulation of multi-dimensional two-phase flow based on flux vector splitting

    Energy Technology Data Exchange (ETDEWEB)

    Staedtke, H.; Franchello, G.; Worth, B. [Joint Research Centre - Ispra Establishment (Italy)

    1995-09-01

    This paper describes a new approach to the numerical simulation of transient, multidimensional two-phase flow. The development is based on a fully hyperbolic two-fluid model of two-phase flow using separated conservation equations for the two phases. Features of the new model include the existence of real eigenvalues, and a complete set of independent eigenvectors which can be expressed algebraically in terms of the major dependent flow parameters. This facilitates the application of numerical techniques specifically developed for high speed single-phase gas flows which combine signal propagation along characteristic lines with the conservation property with respect to mass, momentum and energy. Advantages of the new model for the numerical simulation of one- and two- dimensional two-phase flow are discussed.

  18. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  19. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)

    2016-07-27

A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  20. Many electron variational ground state of the two dimensional Anderson lattice

    International Nuclear Information System (INIS)

    Zhou, Y.; Bowen, S.P.; Mancini, J.D.

    1991-02-01

A variational upper bound of the ground-state energy of two-dimensional finite Anderson lattices is determined as a function of lattice size (up to 16 x 16). Two different sets of many-electron basis vectors are used to determine the ground state for all values of the Coulomb integral U. This variational scheme has been successfully tested for one-dimensional models and should give good estimates in two dimensions.
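
    A minimal sketch of the underlying variational principle, under the assumption of a generic (here random, real-symmetric) Hamiltonian rather than the record's many-electron Anderson-lattice Hamiltonian: projecting onto any finite set of orthonormal basis vectors and diagonalizing yields an upper bound on the true ground-state energy.

```python
# Generic illustration of a variational upper bound: project a Hamiltonian onto
# a small set of basis vectors and diagonalize; the lowest eigenvalue of the
# projected matrix bounds the true ground-state energy from above.
# (Toy random Hamiltonian; the record uses many-electron bases on a 2-D lattice.)
import numpy as np

rng = np.random.default_rng(1)
dim, nbasis = 200, 12
H = rng.normal(size=(dim, dim))
H = 0.5 * (H + H.T)                                    # real symmetric Hamiltonian

B = np.linalg.qr(rng.normal(size=(dim, nbasis)))[0]    # orthonormal trial basis
H_proj = B.T @ H @ B                                   # projected Hamiltonian
upper_bound = np.linalg.eigvalsh(H_proj)[0]
exact_E0 = np.linalg.eigvalsh(H)[0]
print(upper_bound >= exact_E0)                         # variational bound: always True
```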

  1. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-01-01

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  2. Genetic manipulation of endosymbionts to control vector and vector borne diseases

    Directory of Open Access Journals (Sweden)

    Jay Prakash Gupta

Full Text Available Vector borne diseases (VBD) are on the rise because of the failure of existing methods of control of vectors and vector borne diseases, and because of climate change. The steep rise of VBDs is due to several factors, such as the selection of insecticide-resistant vector populations, drug-resistant parasite populations and the lack of effective vaccines against the VBDs. Environmental pollution, public health hazards and insecticide-resistant vector populations indicate that insecticides are no longer a sustainable control method for vectors and vector-borne diseases. Amongst the various alternative control strategies, a symbiont-based approach utilizing endosymbionts of arthropod vectors could be explored to control vectors and vector borne diseases. The endosymbiont population of arthropod vectors could be exploited in different ways, viz., as a chemotherapeutic target or a vaccine target for the control of vectors. Expression of molecules with antiparasitic activity by genetically transformed symbiotic bacteria of disease-transmitting arthropods may serve as a powerful approach to control certain arthropod-borne diseases. Genetic transformation of symbiotic bacteria of the arthropod vector to alter the vector's ability to transmit pathogens is an alternative means of blocking the transmission of VBDs. In the Indian scenario, where dengue, chikungunya, malaria and filariosis are prevalent, the paratransgenic approach can be used effectively. [Vet World 2012; 5(9): 571-576]

  3. Emerging vector borne diseases – incidence through vectors

    Directory of Open Access Journals (Sweden)

    Sara eSavic

    2014-12-01

Full Text Available Vector borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays, even in intercontinental countries, there is a struggle with emerging diseases that have found their way to appear through vectors. Vector borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens and a susceptible human population exist at the same time, in the same place. Global climate change is predicted to lead to an increase in vector borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for the diseases that are most likely to emerge is required. Canine vector borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, erlichiosis and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role first in the prevention and then in the treatment of vector borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a four-year period, from 2009-2013, a total of 551 dog samples were analysed for vector borne diseases (borreliosis, babesiosis, erlichiosis, anaplasmosis, dirofilariosis and leishmaniasis) in routine laboratory work. The analyses were done by serological tests – ELISA for borreliosis, dirofilariosis and leishmaniasis, the modified Knott test for dirofilariosis, and blood smears for babesiosis, erlichiosis and anaplasmosis. This number of samples represented 75% of the total number of samples that were sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  4. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  5. Predicting High Frequency Exchange Rates using Machine Learning

    OpenAIRE

Palikuca, Aleksandar; Seidl, Timo

    2016-01-01

    This thesis applies a committee of Artificial Neural Networks and Support Vector Machines on high-dimensional, high-frequency EUR/USD exchange rate data in an effort to predict directional market movements on up to a 60 second prediction horizon. The study shows that combining multiple classifiers into a committee produces improved precision relative to the best individual committee members and outperforms previously reported results. A trading simulation implementing the committee classifier...
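
    A rough sketch of the committee idea described above, assuming scikit-learn stand-ins (an MLP for the neural networks and an SVC for the SVM) and purely synthetic features in place of the EUR/USD order-book data; it is meant only to show how soft voting combines the two classifiers into a single directional prediction.

```python
# Rough sketch of a classifier committee in the spirit of the thesis above:
# an MLP (standing in for the ANNs) and an SVM vote on the direction of the
# next price move. Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))                               # stand-in feature vectors
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)   # up/down label

committee = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
        ("svm", SVC(probability=True)),
    ],
    voting="soft",                                            # average predicted probabilities
)
committee.fit(X[:800], y[:800])
print("held-out accuracy:", committee.score(X[800:], y[800:]))
```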

  6. Vector-vector production in photon-photon interactions

    International Nuclear Information System (INIS)

    Ronan, M.T.

    1988-01-01

Measurements of exclusive untagged ρ⁰ρ⁰, ρφ, K*K̄*, and ρω production and tagged ρ⁰ρ⁰ production in photon-photon interactions by the TPC/Two-Gamma experiment are reviewed. Comparisons to the results of other experiments and to models of vector-vector production are made. Fits to the data following a four-quark model prescription for vector meson pair production are also presented. 10 refs., 9 figs

  7. Three-dimensional magnetic properties of soft magnetic composite materials

    International Nuclear Information System (INIS)

    Lin, Z.W.; Zhu, J.G.

    2007-01-01

A three-dimensional (3-D) magnetic property measurement system, which can control the three components of the magnetic flux density B vector and measure the magnetic field strength H vector in a cubic sample of soft magnetic material, has been developed and calibrated. This paper studies the relationship between the B and H loci in 3-D space, and the power loss features of a soft magnetic composite when the B loci are controlled to be circles with increasing magnitudes and ellipses evolving from a straight line to a circle in three orthogonal planes. It is found that the B and H loci lie in the same magnetization plane, but the H loci and power losses strongly depend on the orientation, position, and process of magnetization. On the other hand, the H vector evolves into a unique locus, and the power loss approaches a unique value, when the B vector evolves into the round locus with the same magnitude from either a series of circles or ellipses.

  8. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
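
    As a hedged illustration of the "down-sizing" strategy evaluated above, the sketch below randomly undersamples the majority class before fitting an ordinary classifier; the data, the class ratio and the choice of logistic regression are placeholder assumptions, not the paper's simulation setup.

```python
# Sketch of "down-sizing": randomly undersample the majority class so both
# classes are equally represented in the training set, then fit a standard
# classifier. Data are synthetic, high-dimensional and strongly imbalanced.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(1050, 500))                 # high-dimensional features
y = np.array([0] * 1000 + [1] * 50)              # severe class imbalance
X[y == 1, :10] += 1.0                            # weak signal in 10 variables

maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
keep = rng.choice(maj, size=len(mino), replace=False)   # down-size the majority class
idx = np.concatenate([keep, mino])

clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
# With the balanced training subset, the fitted model is no longer driven
# toward assigning almost every new sample to the majority class.
```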

  9. "Lollipop-shaped" high-sensitivity Microelectromechanical Systems vector hydrophone based on Parylene encapsulation

    Science.gov (United States)

    Liu, Yuan; Wang, Renxin; Zhang, Guojun; Du, Jin; Zhao, Long; Xue, Chenyang; Zhang, Wendong; Liu, Jun

    2015-07-01

This paper presents methods of improving the sensitivity of a Microelectromechanical Systems (MEMS) vector hydrophone by increasing the sensing area of the cilium and by a perfectly insulating Parylene membrane. First, a low-density sphere is integrated with the cilium to compose a "lollipop shape," which can considerably increase the sensing area. A mathematical model of the sensitivity of the "lollipop-shaped" MEMS vector hydrophone is presented, and the influences of different structural parameters on the sensitivity are analyzed via simulation. Second, the MEMS vector hydrophone is encapsulated through the conformal deposition of an insulating Parylene membrane, which enables underwater acoustic monitoring without any type of sound-transparent encapsulation. Finally, the characterization results demonstrate that the sensitivity reaches up to -183 dB at 500 Hz (0 dB re 1 V/μPa), an increase of more than 10 dB compared with the previous cilium-shaped MEMS vector hydrophone. In addition, the frequency response shows a sensitivity increment of 6 dB per octave. The working frequency band is 20-500 Hz and the concave point depth of the 8-shaped directivity is beyond 30 dB, indicating that the hydrophone is promising for underwater acoustic applications.

  10. Bandgap optimization of two-dimensional photonic crystals using semidefinite programming and subspace methods

    International Nuclear Information System (INIS)

    Men, H.; Nguyen, N.C.; Freund, R.M.; Parrilo, P.A.; Peraire, J.

    2010-01-01

    In this paper, we consider the optimal design of photonic crystal structures for two-dimensional square lattices. The mathematical formulation of the bandgap optimization problem leads to an infinite-dimensional Hermitian eigenvalue optimization problem parametrized by the dielectric material and the wave vector. To make the problem tractable, the original eigenvalue problem is discretized using the finite element method into a series of finite-dimensional eigenvalue problems for multiple values of the wave vector parameter. The resulting optimization problem is large-scale and non-convex, with low regularity and non-differentiable objective. By restricting to appropriate eigenspaces, we reduce the large-scale non-convex optimization problem via reparametrization to a sequence of small-scale convex semidefinite programs (SDPs) for which modern SDP solvers can be efficiently applied. Numerical results are presented for both transverse magnetic (TM) and transverse electric (TE) polarizations at several frequency bands. The optimized structures exhibit patterns which go far beyond typical physical intuition on periodic media design.

  11. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  12. Traditional vectors as an introduction to geometric algebra

    International Nuclear Information System (INIS)

    Carroll, J E

    2003-01-01

    The 2002 Oersted Medal Lecture by David Hestenes concerns the many advantages for education in physics if geometric algebra were to replace standard vector algebra. However, such a change has difficulties for those who have been taught traditionally. A new way of introducing geometric algebra is presented here using a four-element array composed of traditional vector and scalar products. This leads to an explicit 4 x 4 matrix representation which contains key requirements for three-dimensional geometric algebra. The work can be extended to include Maxwell's equations where it is found that curl and divergence appear naturally together. However, to obtain an explicit representation of space-time algebra with the correct behaviour under Lorentz transformations, an 8 x 8 matrix representation has to be formed. This leads to a Dirac representation of Maxwell's equations showing that space-time algebra has hidden within its formalism the symmetry of 'parity, charge conjugation and time reversal'
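
    A small numerical aside, under the assumption that the quaternion-like product of the even subalgebra is an acceptable stand-in: the sketch below packs a scalar and a three-component vector into a four-element array whose product is built from the traditional dot and cross products. It only conveys the flavour of combining scalar and vector products into one associative algebra; the lecture's explicit 4 x 4 matrix representation is not reproduced here.

```python
# Illustrative only: the even subalgebra of 3-D geometric algebra (scalars plus
# bivectors) can be written as 4-element arrays whose product combines the
# traditional dot and cross products (quaternion multiplication).
import numpy as np

def ga_product(p, q):
    s1, v1 = p[0], np.asarray(p[1:])
    s2, v2 = q[0], np.asarray(q[1:])
    scalar = s1 * s2 - np.dot(v1, v2)                 # scalar part uses the dot product
    vector = s1 * v2 + s2 * v1 + np.cross(v1, v2)     # vector part uses the cross product
    return np.concatenate([[scalar], vector])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(ga_product(i, j))        # -> [0, 0, 0, 1]: the product of two units gives the third
print(ga_product(i, i))        # -> [-1, 0, 0, 0]: unit elements square to -1
```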

  13. Vectorization, parallelization and porting of nuclear codes. 2001

    International Nuclear Information System (INIS)

    Akiyama, Mitsunaga; Katakura, Fumishige; Kume, Etsuo; Nemoto, Toshiyuki; Tsuruoka, Takuya; Adachi, Masaaki

    2003-07-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the super computer system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 10 codes in fiscal 2001. In this report, the parallelization of Neutron Radiography for 3 Dimensional CT code NR3DCT, the vectorization of unsteady-state heat conduction code THERMO3D, the porting of initial program of MHD simulation, the tuning of Heat And Mass Balance Analysis Code HAMBAC, the porting and parallelization of Monte Carlo N-Particle transport code MCNP4C3, the porting and parallelization of Monte Carlo N-Particle transport code system MCNPX2.1.5, the porting of induced activity calculation code CINAC-V4, the use of VisLink library in multidimensional two-fluid model code ACD3D and the porting of experiment data processing code from GS8500 to SR8000 are described. (author)

  14. High-efficiency and flexible generation of vector vortex optical fields by a reflective phase-only spatial light modulator.

    Science.gov (United States)

    Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2017-08-01

The scheme for generating vector optical fields should have not only high efficiency but also the flexibility to satisfy the requirements of various applications. However, in general, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM) based on the optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. The approach has several advantages, including a simple experimental setup, good flexibility, and high efficiency, making it very promising for applications in which higher power is needed. It has a generation efficiency of 44.0%, which is much higher than the 1.1% of the common-path interferometric approach.

  15. Managing the resilience space of the German energy system - A vector analysis.

    Science.gov (United States)

    Schlör, Holger; Venghaus, Sandra; Märker, Carolin; Hake, Jürgen-Friedrich

    2018-07-15

The UN Sustainable Development Goals formulated in 2016 confirmed the sustainability concept of the Earth Summit of 1992 and supported UNEP's green economy transition concept. The transformation of the energy system (Energiewende) is the keystone of Germany's sustainability strategy and of the German green economy concept. We use ten updated energy-related indicators of the German sustainability strategy to analyse the German energy system. The development of the sustainability indicators is examined in the monitoring process by a vector analysis performed in two-dimensional Euclidean space (the Euclidean plane). The aim of the novel vector analysis is to measure the current status of the Energiewende in Germany and thereby provide decision makers with information about the strains along the specific remaining pathway, for the single indicators and for the total system, in order to meet the sustainability targets of the Energiewende. Within this vector model, three vectors (the normative sustainable development vector, the real development vector, and the green economy vector) define the resilience space of our analysis. The resilience space encloses a number of vectors representing different pathways, with different technological and socio-economic strains, to achieve a sustainable development of the green economy. In this space, the decision will be made as to whether the government measures will lead to a resilient energy system or whether a readjustment of indicator targets or political measures is necessary. The vector analysis enables us to analyse, first, the government's ambition, which is expressed in the sustainability targets for the indicators at the start of the sustainability strategy and represents the starting preference order of the German government (SPO), and, secondly, the current preference order of German society, which must bridge the remaining distance to the specific sustainability goals of the strategy and is summarized in the current preference order (CPO).
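
    A deliberately simplified, hypothetical illustration of the vector construction described above: for a single indicator, the sketch compares a real development vector with a normative target vector in the plane and reports the remaining Euclidean distance; none of the numbers come from the German sustainability strategy.

```python
# Hypothetical illustration only: one indicator tracked in the (time, value)
# plane, with invented placeholder numbers rather than strategy data.
import numpy as np

start   = np.array([2008.0, 100.0])   # base year, indexed indicator value
target  = np.array([2020.0, 160.0])   # normative sustainable-development goal
current = np.array([2016.0, 130.0])   # observed status

v_norm = target - start               # normative development vector
v_real = current - start              # real development vector
gap    = target - current             # remaining pathway

cos_angle = (v_real @ v_norm) / (np.linalg.norm(v_real) * np.linalg.norm(v_norm))
print("remaining Euclidean distance to target:", np.linalg.norm(gap))
print("alignment of real vs. normative vector (cosine):", round(cos_angle, 3))
```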

  16. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom ... hampers the task of real-time processing. In a second study, some of the issues with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated, both through simulations and via experimental setups in various flow conditions, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges ...

  17. Rare Hadronic B Decays to Vector, Axial-Vector and Tensors

    International Nuclear Information System (INIS)

    Gao, Y.Y.

    2011-01-01

The authors review BABAR measurements of several rare B decays, including the vector-axial-vector decays B± → φK₁±(1270), B± → φK₁±(1400) and B± → b₁∓ρ±; the vector-vector decays B± → φK*±(1410), B⁰ → K*⁰K̄*⁰, B⁰ → K*⁰K*⁰ and B⁰ → K*⁺K*⁻; the vector-tensor decays B± → φK₂*(1430)± and φK₂(1770)±/(1820)±; and the vector-scalar decays B± → φK₀*(1430)±. Understanding the observed polarization pattern requires amplitude contributions from an uncertain source.

  18. Three-dimensional polarization algebra.

    Science.gov (United States)

    R Sheppard, Colin J; Castello, Marco; Diaspro, Alberto

    2016-10-01

    If light is focused or collected with a high numerical aperture lens, as may occur in imaging and optical encryption applications, polarization should be considered in three dimensions (3D). The matrix algebra of polarization behavior in 3D is discussed. It is useful to convert between the Mueller matrix and two different Hermitian matrices, representing an optical material or system, which are in the literature. Explicit transformation matrices for converting the column vector form of these different matrices are extended to the 3D case, where they are large (81×81) but can be generated using simple rules. It is found that there is some advantage in using a generalization of the Chandrasekhar phase matrix treatment, rather than that based on Gell-Mann matrices, as the resultant matrices are of simpler form and reduce to the two-dimensional case more easily. Explicit expressions are given for 3D complex field components in terms of Chandrasekhar-Stokes parameters.

  19. Three-dimensional effects of curved plasma actuators in quiescent air

    International Nuclear Information System (INIS)

    Wang Chincheng; Durscher, Ryan; Roy, Subrata

    2011-01-01

    This paper presents results on a new class of curved plasma actuators for the inducement of three-dimensional vortical structures. The nature of the fluid flow inducement on a flat plate, in quiescent conditions, due to four different shapes of dielectric barrier discharge (DBD) plasma actuators is numerically investigated. The three-dimensional plasma kinetic equations are solved using our in-house, finite element based, multiscale ionized gas (MIG) flow code. Numerical results show electron temperature and three dimensional plasma force vectors for four shapes, which include linear, triangular, serpentine, and square actuators. Three-dimensional effects such as pinching and spreading the neighboring fluid are observed for serpentine and square actuators. The mechanisms of vorticity generation for DBD actuators are discussed. Also the influence of geometric wavelength (λ) and amplitude (Λ) of the serpentine and square actuators on vectored thrust inducement is predicted. This results in these actuators producing significantly better flow mixing downstream as compared to the standard linear actuator. Increasing the wavelengths of serpentine and square actuators in the spanwise direction is shown to enhance the pinching effect giving a much higher vertical velocity. On the contrary, changing the amplitude of the curved actuator varies the streamwise velocity significantly influencing the near wall jet. Experimental data for a serpentine actuator are also reported for validation purpose.

20. Charge density wave properties of the quasi-two-dimensional purple molybdenum bronze KMo₆O₁₇

    Science.gov (United States)

    Balaska, H.; Dumas, J.; Guyot, H.; Mallet, P.; Marcus, J.; Schlenker, C.; Veuillen, J. Y.; Vignolles, D.

    2005-06-01

The purple molybdenum bronze KMo₆O₁₇ is a quasi-two-dimensional compound which shows a Peierls transition towards a commensurate metallic CDW state. Electron spectroscopy (ARUPS), Scanning Tunnelling Microscopy (STM) and spectroscopy (STS) as well as high magnetic field studies are reported. ARUPS studies corroborate the model of the hidden nesting and provide a value of the CDW vector in good agreement with other measurements. STM studies visualize the triple-q CDW in real space. This is consistent with other measurements of the CDW vector. STS studies provide a value of several tens of meV for the average CDW gap. High magnetic field measurements performed in pulsed fields up to 55 T establish that first-order transitions to smaller-gap states take place at low temperature. These transitions are ascribed to Pauli-type coupling. A phase diagram summarizing all observed anomalies and transitions is presented.

  1. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  2. Heterologous prime-boost immunization of Newcastle disease virus vectored vaccines protected broiler chickens against highly pathogenic avian influenza and Newcastle disease viruses.

    Science.gov (United States)

    Kim, Shin-Hee; Samal, Siba K

    2017-07-24

Avian Influenza virus (AIV) is an important pathogen for both human and animal health. There is a great need to develop a safe and effective vaccine for AI infections in the field. Live-attenuated Newcastle disease virus (NDV) vectored AI vaccines have been shown to be effective, but preexisting antibodies to the vaccine vector can affect the protective efficacy of the vaccine in the field. To improve the efficacy of the AI vaccine, we generated a novel vectored vaccine by using a chimeric NDV vector that is serologically distant from NDV. In this study, the protective efficacy of our vaccines was evaluated by using H5N1 highly pathogenic avian influenza virus (HPAIV) strain A/Vietnam/1203/2004, a prototype strain for vaccine development. The vaccine viruses were three chimeric NDVs expressing the hemagglutinin (HA) protein in combination with the neuraminidase (NA) protein, matrix 1 protein, or nonstructural 1 protein. Comparison of their protective efficacy between single and prime-boost immunizations indicated that prime immunization of 1-day-old SPF chicks with our vaccine viruses followed by boosting with the conventional NDV vector strain LaSota expressing the HA protein provided complete protection of chickens against mortality, clinical signs and virus shedding. Further verification of our heterologous prime-boost immunization using commercial broiler chickens suggested that sequential immunization of chickens with the chimeric NDV vector expressing the HA and NA proteins followed by a boost with the NDV vector expressing the HA protein can be a promising strategy for field vaccination against HPAIVs and against highly virulent NDVs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. An improved ternary vector system for Agrobacterium-mediated rapid maize transformation.

    Science.gov (United States)

    Anand, Ajith; Bass, Steven H; Wu, Emily; Wang, Ning; McBride, Kevin E; Annaluru, Narayana; Miller, Michael; Hua, Mo; Jones, Todd J

    2018-05-01

    A simple and versatile ternary vector system that utilizes improved accessory plasmids for rapid maize transformation is described. This system facilitates high-throughput vector construction and plant transformation. The super binary plasmid pSB1 is a mainstay of maize transformation. However, the large size of the base vector makes it challenging to clone, the process of co-integration is cumbersome and inefficient, and some Agrobacterium strains are known to give rise to spontaneous mutants resistant to tetracycline. These limitations present substantial barriers to high throughput vector construction. Here we describe a smaller, simpler and versatile ternary vector system for maize transformation that utilizes improved accessory plasmids requiring no co-integration step. In addition, the newly described accessory plasmids have restored virulence genes found to be defective in pSB1, as well as added virulence genes. Testing of different configurations of the accessory plasmids in combination with T-DNA binary vector as ternary vectors nearly doubles both the raw transformation frequency and the number of transformation events of usable quality in difficult-to-transform maize inbreds. The newly described ternary vectors enabled the development of a rapid maize transformation method for elite inbreds. This vector system facilitated screening different origins of replication on the accessory plasmid and T-DNA vector, and four combinations were identified that have high (86-103%) raw transformation frequency in an elite maize inbred.

  4. Construction of high-dimensional universal quantum logic gates using a Λ system coupled with a whispering-gallery-mode microresonator.

    Science.gov (United States)

    He, Ling Yan; Wang, Tie-Jun; Wang, Chuan

    2016-07-11

High-dimensional quantum systems provide a higher quantum channel capacity, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates is required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate in which the high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic qudit, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and it operates on a shorter temporal scale for experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to a 2n-dimensional case and that it is feasible with current technology.
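
    A hedged numerical sketch of what a controlled-flip unitary can look like for a qubit control and a d-level target, taking "flip" to mean the cyclic-shift generalized Pauli X as a stand-in; the actual gate realized by the Λ-system/microresonator scheme may define the flip operation differently.

```python
# Hedged sketch: a controlled-flip unitary on a qubit (control) plus a d-level
# qudit (target), with the "flip" taken to be the cyclic-shift generalized
# Pauli X_d as a stand-in operation.
import numpy as np

def controlled_flip(d):
    X_d = np.roll(np.eye(d), 1, axis=0)          # |j> -> |j+1 mod d> on the target
    P0 = np.diag([1.0, 0.0])                     # control in |0>: do nothing
    P1 = np.diag([0.0, 1.0])                     # control in |1>: apply X_d
    return np.kron(P0, np.eye(d)) + np.kron(P1, X_d)

U = controlled_flip(4)
print(np.allclose(U.conj().T @ U, np.eye(8)))    # unitarity check -> True
```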

  5. Conservation laws and two-dimensional black holes in dilaton gravity

    Science.gov (United States)

    Mann, R. B.

    1993-05-01

    A very general class of Lagrangians which couple scalar fields to gravitation and matter in two spacetime dimensions is investigated. It is shown that a vector field exists along whose flow lines the stress-energy tensor is conserved, regardless of whether or not the equations of motion are satisfied or if any Killing vectors exist. Conditions necessary for the existence of Killing vectors are derived. A new set of two-dimensional (2D) black-hole solutions is obtained for one particular member within this class of Lagrangians, which couples a Liouville field to 2D gravity in a novel way. One solution of this theory bears an interesting resemblance to the 2D string-theoretic black hole, yet contains markedly different thermodynamic properties.

  6. Hawking radiation of spin-1 particles from a three-dimensional rotating hairy black hole

    Energy Technology Data Exchange (ETDEWEB)

    Sakalli, I.; Ovgun, A., E-mail: ali.ovgun@emu.edu.tr [Eastern Mediterranean University Famagusta, North Cyprus, Department of Physics (Turkey)

    2015-09-15

    We study the Hawking radiation of spin-1 particles (so-called vector particles) from a three-dimensional rotating black hole with scalar hair using a Hamilton–Jacobi ansatz. Using the Proca equation in the WKB approximation, we obtain the tunneling spectrum of vector particles. We recover the standard Hawking temperature corresponding to the emission of these particles from a rotating black hole with scalar hair.

  7. A three-dimensional kinematic model for the dissolution of crystals

    Science.gov (United States)

    Tellier, C. R.

    1989-06-01

    The two-dimensional kinematic theory developed by Frank is extended into three dimensions. It is shown that the theoretical equations for the propagation vector associated with the displacement of a moving surface element can be directly derived from the polar equation of the slowness surface.

  8. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ˜4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ˜200∶1.

  9. Micromechanics of Composite Materials Governed by Vector Constitutive Laws

    Science.gov (United States)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2017-01-01

The high-fidelity generalized method of cells micromechanics theory has been extended for the prediction of the effective property tensor and the corresponding local field distributions for composites whose constituents are governed by vector constitutive laws. As shown, the shear analogy, which can predict effective transverse properties, is not valid in the general three-dimensional case. Consequently, a general derivation is presented that is applicable to both continuously and discontinuously reinforced composites with arbitrary vector constitutive laws and periodic microstructures. Results are given for thermal and electric problems, effective properties and local field distributions, ordered and random microstructures, as well as complex geometries including woven composites. Comparisons of the theory's predictions are made to test data, numerical analysis, and classical expressions from the literature. Further, classical methods cannot provide the local field distributions in the composite, and it is demonstrated that, as the percolation threshold is approached, their predictions are increasingly unreliable. It has been observed that the bonding between the fibers and matrix in composite materials can be imperfect. In the context of thermal conductivity, such imperfect interfaces have been investigated in micromechanical models by Dunn and Taya (1993), Duan and Karihaloo (2007), Nan et al. (1997) and Hashin (2001). The present HFGMC micromechanical method, derived for perfectly bonded composite materials governed by vector constitutive laws, can be easily generalized to include the effects of weak bonding between the constituents. Such generalizations, in the context of the mechanical micromechanics problem, involve introduction of a traction-separation law at the fiber/matrix interface and have been presented by Aboudi (1987), Bednarcyk and Arnold (2002), Bednarcyk et al. (2004) and Aboudi et al. (2013) and will be addressed in the future.

  10. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  11. Detection of surface cracking in steel pipes based on vibration data using a multi-class support vector machine classifier

    Science.gov (United States)

    Mustapha, S.; Braytee, A.; Ye, L.

    2017-04-01

In this study, we focused on the development and verification of a robust framework for surface crack detection in steel pipes using measured vibration responses, with multiple progressive damage cases occurring in different locations within the structure. Feature selection, dimensionality reduction, and a multi-class support vector machine were established for this purpose. Nine damage cases, at different locations, orientations and lengths, were introduced into the pipe structure. The pipe was impacted 300 times using an impact hammer; after each damage case, the vibration data were collected using 3 PZT wafers installed on the outer surface of the pipe. At first, damage-sensitive features were extracted using the frequency response function approach, followed by recursive feature elimination for dimensionality reduction. Then, a multi-class support vector machine learning algorithm was employed to train the data and generate a statistical model. Once the model is established, decision values and distances from the hyper-plane are generated for newly collected data using the trained model. This process was repeated on the data collected from each sensor. Overall, using a single sensor for training and testing led to a very high accuracy, reaching 98% in the assessment of the 9 damage cases used in this study.
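
    The processing chain described above (feature extraction, recursive feature elimination, multi-class SVM) can be sketched with scikit-learn stand-ins as follows; the array shapes, the linear kernel and the synthetic "FRF features" are all placeholder assumptions rather than the study's actual pipeline.

```python
# Sketch of a feature-selection + multi-class SVM chain in the spirit of the
# study above: recursive feature elimination over FRF-derived features and a
# linear-kernel SVC for the damage classes. Feature values are synthetic.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_cases, n_impacts, n_features = 9, 100, 400              # placeholder sizes
X = rng.normal(size=(n_cases * n_impacts, n_features))    # stand-in FRF features
y = np.repeat(np.arange(n_cases), n_impacts)              # damage-case labels
X[np.arange(len(y)), y] += 2.0                            # toy class separation

model = Pipeline([
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.2)),
    ("svm", SVC(kernel="linear", decision_function_shape="ovo")),
])
model.fit(X, y)
distances = model.decision_function(X[:5])   # pairwise distances from the hyper-planes
```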

  12. Simplified lentivirus vector production in protein-free media using polyethylenimine-mediated transfection.

    Science.gov (United States)

    Kuroda, Hitoshi; Kutner, Robert H; Bazan, Nicolas G; Reiser, Jakob

    2009-05-01

    During the past 12 years, lentiviral vectors have emerged as valuable tools for transgene delivery because of their ability to transduce nondividing cells and their capacity to sustain long-term transgene expression. Despite significant progress, the production of high-titer high-quality lentiviral vectors is cumbersome and costly. The most commonly used method to produce lentiviral vectors involves transient transfection using calcium phosphate (CaP)-mediated precipitation of plasmid DNAs. However, inconsistencies in pH can cause significant batch-to-batch variations in lentiviral vector titers, making this method unreliable. This study describes optimized protocols for lentiviral vector production based on polyethylenimine (PEI)-mediated transfection, resulting in more consistent lentiviral vector stocks. To achieve this goal, simple production methods for high-titer lentiviral vector production involving transfection of HEK 293T cells immediately after plating were developed. Importantly, high titers were obtained with cell culture media lacking serum or other protein additives altogether. As a consequence, large-scale lentiviral vector stocks can now be generated with fewer batch-to-batch variations and at reduced costs and with less labor compared to the standard protocols.

  13. Flight-Determined Subsonic Longitudinal Stability and Control Derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) with Thrust Vectoring

    Science.gov (United States)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1997-01-01

    The subsonic longitudinal stability and control derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) are extracted from dynamic flight data using a maximum likelihood parameter identification technique. The technique uses the linearized aircraft equations of motion in their continuous/discrete form and accounts for state and measurement noise as well as thrust-vectoring effects. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft, particularly at high angles of attack. Thrust vectoring was implemented using electrohydraulically-actuated nozzle postexit vanes and a specialized research flight control system. During maneuvers, a control system feature provided independent aerodynamic control surface inputs and independent thrust-vectoring vane inputs, thereby eliminating correlations between the aircraft states and controls. Substantial variations in control excitation and dynamic response were exhibited for maneuvers conducted at different angles of attack. Opposing vane interactions caused most thrust-vectoring inputs to experience some exhaust plume interference and thus reduced effectiveness. The estimated stability and control derivatives are plotted, and a discussion relates them to predicted values and maneuver quality.

  14. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative that can replace the traditional transducer technique for monitoring the dynamic response of the shaking table structure.

  15. Centre-of-mass frames in six-dimensional special relativity

    International Nuclear Information System (INIS)

    Cole, E.A.B.

    1980-01-01

    Centre-of-mass frames are defined in six-dimensional special relativity. In particular, these frames are studied for various pairs of particles which can be any combination of bradyons, luxons and tachyons. These frames can be subluminal, superluminal or non-existent, depending on the angle between the particle time vectors. (author)

  16. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

A novel data storage scheme in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and show numerical results to demonstrate the effectiveness of the proposed data storage.

  17. Anomalous couplings, resonances and unitarity in vector boson scattering

    Energy Technology Data Exchange (ETDEWEB)

    Sekulla, Marco

    2015-12-04

The Standard Model of particle physics has proved itself a reliable theory for describing the interactions of elementary particles. However, many questions concerning the Higgs sector and the associated electroweak symmetry breaking are still open, even after (or because) a light Higgs boson has been discovered. The 2→2 scattering amplitude of weak vector bosons is suppressed in the Standard Model due to Higgs boson exchange. Therefore, weak vector boson scattering processes are very sensitive to additional contributions beyond the Standard Model. Possible new-physics deviations can be studied model-independently by higher-dimensional operators within the effective field theory framework. In this thesis, a complete set of dimension-six and dimension-eight operators is discussed for vector boson scattering processes. Assuming a scenario where new physics in the Higgs/Goldstone boson sector decouples from the fermion sector and the gauge sector in the high-energy limit, the impact of the dimension-six operator L_HD and the dimension-eight operators L_S,0 and L_S,1 on vector boson scattering processes can be studied separately for complete processes at particle colliders. However, a conventional effective field theory analysis will violate S-matrix unitarity above a certain energy. The direct T-matrix scheme is developed to allow a study of effective field theory operators consistent with basic quantum-mechanical principles over the complete energy reach of current and future colliders. Additionally, this scheme can be used preventively for any model, because it leaves invariant those theoretical predictions that already satisfy unitarity. The effective field theory approach is further extended by allowing additional generic resonances coupling to the Higgs/Goldstone boson sector, namely the isoscalar-scalar, isoscalar-tensor, isotensor-scalar and isotensor-tensor. In particular, the Stueckelberg formalism is used to investigate the impact of the tensor degree of

  18. Relativistic band gaps in one-dimensional disordered systems

    International Nuclear Information System (INIS)

    Clerk, G.J.; McKellar, B.H.J.

    1992-01-01

    Conditions for the existence of band gaps in a one-dimensional disordered array of δ-function potentials possessing short range order are developed in a relativistic framework. Both Lorentz vector and scalar type potentials are treated. The relationship between the energy gaps and the transmission properties of the array are also discussed. 20 refs., 2 figs

  19. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Energy Technology Data Exchange (ETDEWEB)

    Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de [Institute for Computational and Applied Mathematics, University of Münster, Einsteinstrasse 62, D-48149 Münster (Germany); Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Brookes, Mike [Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Rimpiläinen, Ville [Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster (Germany); Department of Mathematics, University of Auckland, Private bag 92019, Auckland 1142 (New Zealand)

    2017-01-15

In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
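
    As a minimal numerical aside, the sketch below evaluates a single "longitudinal" measurement, i.e. the parallel projection of a 2-D vector field integrated along a straight line, for an arbitrary analytic field; it only illustrates the type of data the reconstruction starts from, not the paper's sparsity-constrained inversion.

```python
# Numerically evaluate one longitudinal VT measurement: the component of a 2-D
# vector field along a line, integrated over that line. The field is an
# arbitrary analytic example, not the EEG-type dipole field of the paper.
import numpy as np

def field(p):
    x, y = p
    return np.array([-y, x]) + np.array([x, y])   # rotational plus divergent part

def longitudinal_measurement(p0, p1, n=2000):
    t = np.linspace(0.0, 1.0, n)
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]    # sample points on the line
    tangent = (p1 - p0) / np.linalg.norm(p1 - p0)           # unit line direction
    values = np.array([field(p) @ tangent for p in pts])    # F . t along the line
    ds = np.linalg.norm(p1 - p0) / (n - 1)
    # composite trapezoidal rule
    return 0.5 * ds * (values[0] + 2.0 * values[1:-1].sum() + values[-1])

print(longitudinal_measurement(np.array([-1.0, -0.5]), np.array([1.0, 0.5])))
```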

  20. Vector model for polarized second-harmonic generation microscopy under high numerical aperture

    International Nuclear Information System (INIS)

    Wang, Xiang-Hui; Chang, Sheng-Jiang; Lin, Lie; Wang, Lin-Rui; Huo, Bing-Zhong; Hao, Shu-Jian

    2010-01-01

Based on the vector diffraction theory and the generalized Jones matrix formalism, a vector model for polarized second-harmonic generation (SHG) microscopy is developed, which includes the roles of the axial component P_z, the weight factor and the cross-effect between the lateral components. The numerical results show that as the relative magnitude of P_z increases, the polarization response of the second-harmonic signal will vary from linear polarization to elliptical polarization and the polarization orientation of the second-harmonic signal is different from that under the paraxial approximation. In addition, it is interesting that the polarization response of the detected second-harmonic signal can change with the value of the collimator lens NA. Therefore, it is more advantageous to adopt the vector model to investigate the property of polarized SHG microscopy for a variety of cases

  1. Auto-tuning Dense Vector and Matrix-vector Operations for Fermi GPUs

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    applications. As examples, we develop single-precision CUDA kernels for the Euclidian norm (SNRM2) and the matrix-vector multiplication (SGEMV). The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture). We show that auto-tuning can be successfully applied to achieve high performance...

  2. Burgers Vector Analysis of Vertical Dislocations in Ge Crystals by Large-Angle Convergent Beam Electron Diffraction.

    Science.gov (United States)

    Groiss, Heiko; Glaser, Martin; Marzegalli, Anna; Isa, Fabio; Isella, Giovanni; Miglio, Leo; Schäffler, Friedrich

    2015-06-01

    By transmission electron microscopy with extended Burgers vector analyses, we demonstrate the edge and screw character of vertical dislocations (VDs) in novel SiGe heterostructures. The investigated pillar-shaped Ge epilayers on prepatterned Si(001) substrates are an attempt to avoid the high defect densities of lattice mismatched heteroepitaxy. The Ge pillars are almost completely strain-relaxed and essentially defect-free, except for the rather unexpected VDs. We investigated both pillar-shaped and unstructured Ge epilayers grown either by molecular beam epitaxy or by chemical vapor deposition to derive a general picture of the underlying dislocation mechanisms. For the Burgers vector analysis we used a combination of dark field imaging and large-angle convergent beam electron diffraction (LACBED). With LACBED simulations we identify ideally suited zeroth and second order Laue zone Bragg lines for an unambiguous determination of the three-dimensional Burgers vectors. By analyzing dislocation reactions we confirm the origin of the observed types of VDs, which can be efficiently distinguished by LACBED. The screw type VDs are formed by a reaction of perfect 60° dislocations, whereas the edge types are sessile dislocations that can be formed by cross-slips and climbing processes. The understanding of these origins allows us to suggest strategies to avoid VDs.

  3. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

    International Nuclear Information System (INIS)

    Majumdar, A.; Makowitz, H.

    1987-10-01

    With the development of modern vector/parallel supercomputers and their lower-performance clones, it has become possible to increase computational performance by several orders of magnitude compared with the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms with these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation, optimized for vector/parallel architectures rather than for the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of the iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit methods (i.e., marching) where stability will permit. We call this approach the "EPIC" methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
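
    The report's own algorithm variants are not reproduced in this record; as background for the "Hopscotch concept" it mentions, the sketch below implements the classical odd-even hopscotch scheme for the one-dimensional heat equation u_t = α u_xx: half of the grid points are advanced with an explicit update, and the remaining points then use the freshly computed neighbours, so the nominally implicit second sweep also reduces to explicit arithmetic. The grid, time step and Dirichlet boundary handling are assumptions of the example.

        import numpy as np

        def hopscotch_heat_1d(u0, alpha, dx, dt, n_steps):
            """Odd-even hopscotch scheme for u_t = alpha * u_xx on a 1-D grid;
            the end points of u0 are kept fixed (Dirichlet boundaries)."""
            u = np.array(u0, dtype=float)
            r = alpha * dt / dx**2
            idx = np.arange(len(u))
            for n in range(n_steps):
                new = u.copy()
                # Sweep 1: explicit update on points whose parity matches the step.
                mask = ((idx + n) % 2 == 0)
                mask[0] = mask[-1] = False
                new[mask] = u[mask] + r * (u[idx[mask] - 1] - 2.0 * u[mask] + u[idx[mask] + 1])
                # Sweep 2: remaining points use the already-updated neighbours, so the
                # nominally implicit update is also evaluated explicitly.
                mask = ~mask
                mask[0] = mask[-1] = False
                new[mask] = (u[mask] + r * (new[idx[mask] - 1] + new[idx[mask] + 1])) / (1.0 + 2.0 * r)
                u = new
            return u

        # Example: decay of a sine profile on [0, 1]
        x = np.linspace(0.0, 1.0, 101)
        u = hopscotch_heat_1d(np.sin(np.pi * x), alpha=1.0, dx=x[1] - x[0], dt=1e-4, n_steps=1000)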

  4. Measurement of Charmless B to Vector-Vector decays at BaBar

    International Nuclear Information System (INIS)

    Olaiya, Emmanuel

    2011-01-01

    The authors present results of B → vector-vector (VV) and B → vector-axial-vector (VA) decays: B0 → φX (X = φ, ρ+ or ρ0), B+ → φK(*)+, B0 → K*K*, B0 → ρ+ b1− and B+ → K*0 a1+. The largest dataset used for these results is based on 465 × 10^6 Υ(4S) → B B̄ decays, collected with the BABAR detector at the PEP-II B-meson factory located at the Stanford Linear Accelerator Center (SLAC). Using larger datasets, the BABAR experiment has provided more precise B → VV measurements, further supporting the smaller-than-expected longitudinal polarization fraction of B → φK*. Additional B-meson decays to vector-vector and vector-axial-vector final states have also been studied with a view to shedding light on the polarization anomaly. Taking into account the available errors, we find no disagreement between theory and experiment for these additional decays.

  5. Vector-Parallel processing of the successive overrelaxation method

    International Nuclear Information System (INIS)

    Yokokawa, Mitsuo

    1988-02-01

    The successive overrelaxation (SOR) method is one of the iterative methods for solving linear systems of equations, and it has been computed serially with a natural ordering in many nuclear codes. After the appearance of vector processors, this natural SOR method was replaced by parallel algorithms such as the hyperplane or red-black method, in which the calculation order is modified. These methods are suitable for vector processors, and they achieve higher speed than the natural SOR method on such machines. In this report, a new scheme named the 4-colors SOR method is proposed. We find that the 4-colors SOR method can be executed on vector-parallel processors and that it gives the highest speed among all the SOR methods, according to the results of vector-parallel execution on the Alliant FX/8 multiprocessor system. It is also shown that the theoretical optimal acceleration parameters are equal among the five differently ordered SOR methods, and the differences between the convergence rates of these SOR methods are examined. (author)
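
    The 4-colors ordering itself is not spelled out in this record; as a minimal illustration of the coloured-ordering idea it builds on, the sketch below implements red-black SOR for the 2-D Laplace equation: under the 5-point stencil every point of one colour has only neighbours of the other colour, so each colour can be updated as a single vector operation. Grid size, boundary values and the relaxation factor are assumptions of the example.

        import numpy as np

        def redblack_sor(u0, omega=1.8, n_sweeps=200):
            """Red-black SOR for the 2-D Laplace equation; the border of u0 holds
            fixed Dirichlet values.  Each colour is updated in one vector operation."""
            u = u0.astype(float).copy()
            ny, nx = u.shape
            ii, jj = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
            interior = (ii > 0) & (ii < ny - 1) & (jj > 0) & (jj < nx - 1)
            for _ in range(n_sweeps):
                for colour in (0, 1):                        # red sweep, then black sweep
                    m = interior & ((ii + jj) % 2 == colour)
                    avg = 0.25 * (u[ii[m] - 1, jj[m]] + u[ii[m] + 1, jj[m]]
                                  + u[ii[m], jj[m] - 1] + u[ii[m], jj[m] + 1])
                    u[m] = (1.0 - omega) * u[m] + omega * avg
            return u

        u0 = np.zeros((64, 64))
        u0[0, :] = 1.0                                       # one heated boundary edge
        u = redblack_sor(u0, omega=1.9)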

  6. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
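
    The 2.05 bits per sifted photon quoted above comes from the experiment's own analysis; as a rough back-of-the-envelope companion (an editorial addition, not from the paper), the mutual information of a d-ary symmetric channel with symbol error rate e, log2 d + (1 − e) log2(1 − e) + e log2(e / (d − 1)), is a quantity commonly used to gauge how many bits per photon a d-dimensional alphabet can carry. The numbers in the example are illustrative only.

        import numpy as np

        def symmetric_channel_info(d, error_rate):
            """Mutual information (bits per symbol) of a d-ary symmetric channel in
            which an error lands on any of the d - 1 wrong symbols with equal probability."""
            e = error_rate
            if e == 0.0:
                return np.log2(d)
            return np.log2(d) + (1 - e) * np.log2(1 - e) + e * np.log2(e / (d - 1))

        # e.g. a 7-dimensional alphabet with an assumed 10% symbol error rate
        print(symmetric_channel_info(7, 0.10))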

  7. Vector mesons and chiral symmetry

    International Nuclear Information System (INIS)

    Ecker, G.

    1989-01-01

    The ambiguities in the off-shell behaviour of spin-1 exchange can be resolved to O(p^4) in the chiral low-energy expansion if the asymptotic behaviour of QCD is properly incorporated. As a consequence, the chiral version of vector (and axial-vector) meson dominance is model independent. Additional high-energy constraints motivated by QCD determine the V, A resonance couplings uniquely. In particular, QCD in its effective chiral realization successfully predicts Γ(ρ→2π). 10 refs. (Author)

  8. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and to better describe the vector distribution. Mosquito collections using Human Landing Catches, as well as collection of environmental and climatic data, were carried out from January to December 2009 in 28 villages in Southern Benin. An NPMP regression model with "village" as a random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: (i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; (ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment on all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
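
    The NPMP model above is non-parametric and includes a village-level random effect; neither is reproduced here. As a simplified, purely illustrative stand-in, the sketch below fits a fixed-size K-component Poisson mixture by EM, which already shows how a mixture accommodates overdispersion and a large share of zero counts. The component count, starting values and simulated data are assumptions of the example.

        import numpy as np
        from scipy.stats import poisson

        def poisson_mixture_em(counts, k=3, n_iter=200, seed=0):
            """Fit a K-component Poisson mixture to overdispersed counts with EM."""
            rng = np.random.default_rng(seed)
            counts = np.asarray(counts)
            weights = np.full(k, 1.0 / k)
            rates = rng.uniform(0.1, counts.mean() * 2.0 + 1.0, size=k)
            for _ in range(n_iter):
                # E-step: responsibility of each component for each observation
                resp = weights * poisson.pmf(counts[:, None], rates)
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: update mixing weights and component rates
                weights = resp.mean(axis=0)
                rates = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)
            return weights, rates

        # Simulated counts: many zeros plus two busier regimes
        rng = np.random.default_rng(1)
        counts = np.concatenate([np.zeros(300, dtype=int),
                                 rng.poisson(4.0, 60), rng.poisson(25.0, 40)])
        print(poisson_mixture_em(counts))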

  9. Two-dimensional PCA-based human gait identification

    Science.gov (United States)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    For public security reasons, it is necessary to recognize people automatically through visual surveillance. Human gait based identification focuses on recognizing a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, we obtain a sequence of binary images from the surveillance video. By taking the difference of two adjacent images in the gait sequence, we obtain a sequence of binary difference images. Each binary difference image indicates how the body moves while the person walks. We use the following steps to extract temporal-space features from the difference image sequence: projecting one difference image onto the Y axis or the X axis gives two vectors; projecting every difference image in the sequence onto the Y axis or the X axis gives two matrices. These two matrices characterize the style of one walking sequence. 2DPCA is then used to transform these two matrices into two vectors while preserving the maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
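
    A minimal sketch of the projection and 2DPCA steps described above follows; silhouette extraction, the CASIA data handling and the matching protocol are omitted, and all array shapes are illustrative assumptions.

        import numpy as np

        def axis_projection_matrices(frames):
            """frames: (T, H, W) binary silhouette sequence.  Builds the Y-axis and
            X-axis projection matrices from the frame-to-frame difference images."""
            diffs = np.abs(np.diff(frames.astype(float), axis=0))   # (T-1, H, W)
            proj_y = diffs.sum(axis=2)                              # each diff -> length-H vector
            proj_x = diffs.sum(axis=1)                              # each diff -> length-W vector
            return proj_y, proj_x                                   # shapes (T-1, H) and (T-1, W)

        def two_d_pca(samples, n_components=1):
            """Basic 2DPCA over a set of equally sized matrices: the leading eigenvectors
            of the image covariance matrix give the projection basis."""
            samples = np.asarray(samples, dtype=float)
            centered = samples - samples.mean(axis=0)
            cov = np.einsum("nij,nik->jk", centered, centered) / len(samples)
            eigvec = np.linalg.eigh(cov)[1][:, ::-1]                # descending eigenvalue order
            basis = eigvec[:, :n_components]
            return basis, samples @ basis                           # projected features

        # Two gaits are then compared by the Euclidean distance between their
        # (flattened) projected features, as in the abstract.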

  10. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

    Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of the angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in a parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality...
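
    For reference (an editorial addition, not the paper's near-linear estimator), the sketch below computes the exact angle-based outlier factor of Kriegel et al. in the naive cubic-time way: for each point, the variance over all pairs of other points of the distance-weighted angle term; small scores flag outliers. Duplicate points are assumed absent to avoid division by zero.

        import numpy as np
        from itertools import combinations

        def abof(points):
            """Naive O(n^3) angle-based outlier factor (ABOF); low values = outliers."""
            points = np.asarray(points, dtype=float)
            n = len(points)
            scores = np.empty(n)
            for a in range(n):
                others = [i for i in range(n) if i != a]
                terms = []
                for b, c in combinations(others, 2):
                    ab = points[b] - points[a]
                    ac = points[c] - points[a]
                    # scalar product weighted by the squared lengths of both difference vectors
                    terms.append((ab @ ac) / ((ab @ ab) * (ac @ ac)))
                scores[a] = np.var(terms)
            return scores

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(size=(50, 5)), np.full((1, 5), 8.0)])  # one far-away point
        print(np.argmin(abof(X)))                                        # expected: index 50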

  11. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  12. Techniques for vector analyzing power measurements of the ²H(n⃗,np)n breakup reaction at low energies

    International Nuclear Information System (INIS)

    Howell, C.R.; Tornow, W.; Pfuetzner, H.G.; Li Anli; Roberts, M.L.; Murphy, K.; Felsher, P.D.; Weisel, G.J.; Naqvi, A.; Walter, R.L.; Lambert, J.M.; Treado, P.A.

    1990-01-01

    Experimental methods to measure the vector analyzing powers over a broad range of kinematic configurations in the n-d breakup reaction have been developed at TUNL. These techniques employ the polarized beam facilities at TUNL and use the ²H(d⃗,n⃗)³He reaction as a source of low-energy polarized neutrons. Our methods permit measurements to a high statistical accuracy over a large fraction of three-nucleon phase space. The techniques are described and experimental spectra along with kinematic calculations are presented. (orig.)

  13. Nonparaxial and paraxial focusing of azimuthal-variant vector beams.

    Science.gov (United States)

    Gu, Bing; Cui, Yiping

    2012-07-30

    Based on the vectorial Rayleigh-Sommerfeld formulas under the weak nonparaxial approximation, we investigate the propagation behavior of a lowest-order Laguerre-Gaussian beam with azimuthal-variant states of polarization. We present the analytical expressions for the radial, azimuthal, and longitudinal components of the electric field with an arbitrary integer topological charge m focused by a nonaperturing thin lens. We illustrate the three-dimensional optical intensities, energy flux distributions, beam waists, and focal shifts of the focused azimuthal-variant vector beams under the nonparaxial and paraxial approximations.

  14. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multi-group Monte Carlo code for particle transport, MORSE, has been modified for high-performance computing on the Monte Carlo machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high-performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance with vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  15. Intertwined Hamiltonians in two-dimensional curved spaces

    International Nuclear Information System (INIS)

    Aghababaei Samani, Keivan; Zarei, Mina

    2005-01-01

    The problem of intertwined Hamiltonians in two-dimensional curved spaces is investigated. Explicit results are obtained for the Euclidean plane, Minkowski plane, Poincaré half plane (AdS₂), de Sitter plane (dS₂), sphere, and torus. It is shown that the intertwining operator is related to the Killing vector fields and the isometry group of the corresponding space. It is shown that the intertwined potentials are closely connected to the integral curves of the Killing vector fields. Two problems are considered as applications of the formalism presented in the paper. The first one is the problem of Hamiltonians with equispaced energy levels and the second one is the problem of Hamiltonians whose spectrum is like the spectrum of a free particle
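
    As a standard textbook illustration of the Killing-vector connection mentioned above (an editorial addition, not drawn from the paper), the Euclidean plane with metric ds² = dx² + dy² has the three Killing vector fields

        X_1 = \partial_x , \qquad X_2 = \partial_y , \qquad X_3 = x\,\partial_y - y\,\partial_x ,

    with the commutation relations [X_1, X_2] = 0, [X_1, X_3] = X_2 and [X_2, X_3] = -X_1, generating the translations and rotations of the Euclidean isometry group.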

  16. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    Science.gov (United States)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.
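
    The framework's actual view-factor formulas for rotationally symmetric holes and trenches are not given in this record; the sketch below only shows the generic one-dimensional radiosity balance such an approach rests on: the flux arriving at a wall segment equals the direct flux plus the flux re-emitted by all other segments, giving a small dense linear system. The decay profile and view-factor matrix used here are placeholders, not the paper's model.

        import numpy as np

        def solve_radiosity(direct_flux, view_factors, sticking_probability):
            """Solve  flux_i = direct_i + (1 - s) * sum_j view_factors[i, j] * flux_j
            for the total neutral flux arriving at each wall segment."""
            n = len(direct_flux)
            system = np.eye(n) - (1.0 - sticking_probability) * view_factors
            return np.linalg.solve(system, direct_flux)

        # Placeholder setup for a trench discretized into n depth segments
        n = 50
        depth = np.linspace(0.0, 1.0, n)
        direct = np.exp(-3.0 * depth)                         # assumed direct-flux profile
        vf = np.exp(-2.0 * np.abs(depth[:, None] - depth[None, :]))
        vf /= vf.sum(axis=1, keepdims=True)                   # rows normalized (placeholder view factors)
        flux = solve_radiosity(direct, vf, sticking_probability=0.1)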

  17. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main
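
    The spectrally-corrected estimator itself is not described in this truncated record; for context (an editorial sketch), the classical plug-in Markowitz solution it aims to improve uses the sample mean and sample covariance directly, which becomes unreliable when the dimension p is comparable to the sample size n. All data below are synthetic.

        import numpy as np

        def plugin_mv_weights(returns):
            """Plug-in Markowitz weights proportional to Sigma^{-1} mu, normalized to
            sum to one (assumes this normalization is meaningful for the data)."""
            mu = returns.mean(axis=0)
            sigma = np.cov(returns, rowvar=False)
            raw = np.linalg.solve(sigma, mu)
            return raw / raw.sum()

        rng = np.random.default_rng(1)
        returns = rng.normal(0.0005, 0.01, size=(300, 100))   # n = 300 days, p = 100 assets
        w = plugin_mv_weights(returns)
        # With p/n not small, the sample covariance is badly conditioned,
        # which is what spectral corrections are designed to address.
        print(np.linalg.cond(np.cov(returns, rowvar=False)))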

  18. Endothelial Cell-Targeted Adenoviral Vector for Suppressing Breast Malignancies

    National Research Council Canada - National Science Library

    Huang, Shuang

    2004-01-01

    .... Our proposal is designed to develop an endothelial cell-targeted adenoviral vector and to use the targeted vector to express high levels of anticancer therapeutic genes in the sites of angiogenenic...

  19. Vectorization of KENO IV code and an estimate of vector-parallel processing

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

    1986-10-01

    The multi-group criticality safety code KENO IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO IV ran slower on a scalar processor than the original code by a factor of 1.4 because of the overhead introduced by vectorization. After modifications to the algorithms and vectorization techniques, the vectorized version became faster than the original by factors of 1.4 and 3.0 on the vector processor for sample problems with complex and simple geometries, respectively. For further speedup of the code, some improvements to the compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed. Finally, a pipelined parallel processor system is proposed and its performance is estimated. (author)

  20. Algebras of Complete Hörmander Vector Fields, and Lie-Group Construction

    Directory of Open Access Journals (Sweden)

    Andrea Bonfiglioli

    2014-12-01

    The aim of this note is to characterize the Lie algebras g of analytic vector fields on R^N which coincide with the Lie algebras of the (analytic) Lie groups defined on R^N (with its usual differentiable structure). We show that such a characterization amounts to asking that: (i) g is N-dimensional; (ii) g admits a set of Lie generators which are complete vector fields; (iii) g satisfies Hörmander's rank condition. These conditions are necessary, sufficient and mutually independent. Our approach is constructive, in that for any such g we show how to construct a Lie group G = (R^N, *) whose Lie algebra is g. We do not make use of Lie's Third Theorem, but only exploit the Campbell-Baker-Hausdorff-Dynkin Theorem for ODEs.
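
    A simple example consistent with conditions (i)-(iii) above (an editorial illustration, not taken from the note) is the two-dimensional Lie algebra

        \mathfrak{g} = \mathrm{span}\{X_1, X_2\}, \qquad X_1 = \partial_x, \qquad X_2 = e^{x}\,\partial_y, \qquad [X_1, X_2] = X_2 ,

    whose generators are complete analytic vector fields on R^2 and span the tangent space at every point, so Hörmander's rank condition holds trivially. A group G = (R^2, *) with this Lie algebra is given by

        (x_1, y_1) * (x_2, y_2) = \bigl(x_1 + x_2,\; y_1 + e^{x_1} y_2\bigr),

    whose left-invariant vector fields are exactly the span of X_1 and X_2.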