Hoffman, Kenneth
2007-01-01
Developed for an introductory course in mathematical analysis at MIT, this text focuses on concepts, principles, and methods. Its introductions to real and complex analysis are closely formulated, and they constitute a natural introduction to complex function theory. Starting with an overview of the real number system, the text presents results for subsets and functions related to Euclidean space of n dimensions. It offers a rigorous review of the fundamentals of calculus, emphasizing power series expansions and introducing the theory of complex-analytic functions. Subsequent chapters cover seq
Variational submanifolds of Euclidean spaces
Krupka, D.; Urban, Z.; Volná, J.
2018-03-01
Systems of ordinary differential equations (or dynamical forms in Lagrangian mechanics), induced by embeddings of smooth fibered manifolds over one-dimensional basis, are considered in the class of variational equations. For a given non-variational system, conditions assuring variationality (the Helmholtz conditions) of the induced system with respect to a submanifold of a Euclidean space are studied, and the problem of existence of these "variational submanifolds" is formulated in general and solved for second-order systems. The variational sequence theory on sheaves of differential forms is employed as a main tool for the analysis of local and global aspects (variationality and variational triviality). The theory is illustrated by examples of holonomic constraints (submanifolds of a configuration Euclidean space) which are variational submanifolds in geometry and mechanics.
Random walks in Euclidean space
Varjú, Péter Pál
2012-01-01
Consider a sequence of independent random isometries of Euclidean space with a previously fixed probability law. Apply these isometries successively to the origin and consider the sequence of random points that we obtain this way. We prove a local limit theorem under a suitable moment condition and a necessary non-degeneracy condition. Under stronger hypotheses, we prove a limit theorem on a wide range of scales: between e^(-cl^(1/4)) and l^(1/2), where l is the number of steps.
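The setup described above is easy to simulate. The sketch below is a hypothetical illustration in the plane rather than the general setting of the paper: it applies independent random isometries (a rotation by a uniform angle followed by a unit-length translation in a uniform random direction; this particular probability law is an assumption chosen for illustration) successively to the origin.

```python
import math
import random

def random_isometry(rng):
    # A random isometry of the plane: rotation by a uniform angle,
    # then a unit-length translation in a uniform random direction.
    # This particular probability law is an illustrative assumption.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return theta, (math.cos(phi), math.sin(phi))

def apply_isometry(iso, point):
    # Rotate the point about the origin, then translate.
    theta, (tx, ty) = iso
    x, y = point
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

def walk(n_steps, seed=0):
    # Apply n_steps independent random isometries successively to the
    # origin and record the resulting sequence of random points.
    rng = random.Random(seed)
    p, path = (0.0, 0.0), [(0.0, 0.0)]
    for _ in range(n_steps):
        p = apply_isometry(random_isometry(rng), p)
        path.append(p)
    return path

path = walk(1000)
# Typical displacement after l steps is of order l**(1/2).
print(math.hypot(*path[-1]))
```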
Noncommutative products of Euclidean spaces
Dubois-Violette, Michel; Landi, Giovanni
2018-05-01
We present natural families of coordinate algebras on noncommutative products of Euclidean spaces R^{N_1} ×_R R^{N_2}. These coordinate algebras are quadratic ones associated with an R-matrix which is involutive and satisfies the Yang-Baxter equations. As a consequence, they enjoy a list of nice properties, being regular of finite global dimension. Notably, we have eight-dimensional noncommutative Euclidean spaces R^4 ×_R R^4. Among these, particularly well-behaved ones have deformation parameter u ∈ S^2. Quotients include seven-spheres S^7_u as well as noncommutative quaternionic tori T^H_u = S^3 ×_u S^3. There is invariance for an action of SU(2) × SU(2) on the torus T^H_u, in parallel with the action of U(1) × U(1) on a `complex' noncommutative torus T^2_θ, which allows one to construct quaternionic toric noncommutative manifolds. Additional classes of solutions are disjoint from the classical case.
Ideas of space. Euclidean, non-Euclidean and relativistic
Energy Technology Data Exchange (ETDEWEB)
Gray, J
1979-01-01
An historical and chronological account of mathematics is presented, in which familiarity with simple equations and elements of trigonometry is needed but no specialist knowledge is assumed, although difficult problems are discussed. By discussing the difficulties and confusions, the book aims to present mathematics as a dynamic activity. Beginning with early Greek mathematics, the Eastern legacy and the transition to deductive and geometric thinking, the problem of parallels is then encountered and discussed. The second part of the book takes the story from Wallis, Saccheri and Lambert through to its resolution by Gauss, Lobachevskii, Bolyai, Riemann and Beltrami. The background of the 19th-century theory of surfaces is given. The third part gives an account of Einstein's theories based on what has gone before, moving from a Newtonian-Euclidean picture to an Einsteinian-non-Euclidean one. A brief account of gravitation, the nature of space and black holes concludes the book.
Fuzzy Euclidean wormholes in de Sitter space
Energy Technology Data Exchange (ETDEWEB)
Chen, Pisin; Hu, Yao-Chieh; Yeom, Dong-han, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r04244003@ntu.edu.tw, E-mail: innocent.yeom@gmail.com [Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei 10617, Taiwan (China)
2017-07-01
We investigate Euclidean wormholes in Einstein gravity with a massless scalar field in de Sitter space. Euclidean wormholes are possible due to the analytic continuation of time as well as the complexification of fields, where we need to impose classicality after the Wick rotation to the Lorentzian signature. For some parameters, wormholes are preferred over Hawking-Moss instantons, and hence wormholes can be more fundamental than Hawking-Moss-type instantons. Euclidean wormholes can be interpreted in three ways: (1) a classical big bounce, (2) either tunneling from a small to a large universe or the creation of a collapsing and an expanding universe from nothing, and (3) either a transition from a contracting to a bouncing phase or the creation of two expanding universes from nothing. These various interpretations shed some light on the challenges of singularities. In addition, they will help in understanding tensions between various kinds of quantum gravity theories.
Calculus and analysis in Euclidean space
Shurman, Jerry
2016-01-01
The graceful role of analysis in underpinning calculus is often lost to their separation in the curriculum. This book entwines the two subjects, providing a conceptual approach to multivariable calculus closely supported by the structure and reasoning of analysis. The setting is Euclidean space, with the material on differentiation culminating in the inverse and implicit function theorems, and the material on integration culminating in the general fundamental theorem of integral calculus. More in-depth than most calculus books but less technical than a typical analysis introduction, Calculus and Analysis in Euclidean Space offers a rich blend of content to students outside the traditional mathematics major, while also providing transitional preparation for those who will continue on in the subject. The writing in this book aims to convey the intent of ideas early in discussion. The narrative proceeds through figures, formulas, and text, guiding the reader to do mathematics resourcefully by marshaling the skil...
Bochner-Riesz means on Euclidean spaces
Lu, Shanzhen
2013-01-01
This book mainly deals with the Bochner-Riesz means of multiple Fourier integral and series on Euclidean spaces. It aims to give a systematical introduction to the fundamental theories of the Bochner-Riesz means and important achievements attained in the last 50 years. For the Bochner-Riesz means of multiple Fourier integral, it includes the Fefferman theorem which negates the Disc multiplier conjecture, the famous Carleson-Sjölin theorem, and Carbery-Rubio de Francia-Vega's work on almost everywhere convergence of the Bochner-Riesz means below the critical index. For the Bochner-Riesz means o
Uniform Page Migration Problem in Euclidean Space
Directory of Open Access Journals (Sweden)
Amanj Khorramian
2016-08-01
The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server may migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and the request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size D = 1. All request locations are not known in advance; they are presented sequentially in an online fashion. We design a 2.75-competitive online algorithm that improves the current best upper bound for the problem with unit page size. We also provide a lower bound of 2.732 for our algorithm. It was already known that 2.5 is a lower bound for this problem.
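The cost model above is simple to simulate. The sketch below uses a naive "move a fixed fraction toward each request" migration rule as a placeholder strategy; it is not the 2.75-competitive algorithm of the paper, only an illustration of how service and migration costs accrue in the uniform model with D = 1.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def serve_requests(requests, move_ratio=0.5, start=(0.0, 0.0)):
    # Online processing: each request costs dist(server, request); a
    # migration then costs the migration distance times the page size
    # D = 1.  The "move a fixed fraction toward the request" rule is a
    # placeholder strategy, not the paper's 2.75-competitive algorithm.
    server, total = start, 0.0
    for r in requests:
        total += dist(server, r)                   # service cost
        new = (server[0] + move_ratio * (r[0] - server[0]),
               server[1] + move_ratio * (r[1] - server[1]))
        total += dist(server, new)                 # migration cost (D = 1)
        server = new
    return total, server

cost, final = serve_requests([(1.0, 0.0), (1.0, 0.0), (0.0, 0.0)])
print(cost, final)  # 3.375 (0.375, 0.0)
```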
International Nuclear Information System (INIS)
Saveliev, M.V.
1983-01-01
In the framework of the algebraic approach, a construction of exactly integrable two-dimensional Riemannian manifolds embedded into an enveloping Euclidean (pseudo-Euclidean) space R_N of arbitrary dimension is presented. The construction is based on a reformulation of the Gauss, Peterson-Codazzi and Ricci equations in the form of a Lax-type representation in two-dimensional space. Here the Lax pair operators take values in the algebra SO(N).
Extended supersymmetry in four-dimensional Euclidean space
International Nuclear Information System (INIS)
McKeon, D.G.C.; Sherry, T.N.
2000-01-01
Since the generators of the two SU(2) groups which comprise SO(4) are not Hermitian conjugates of each other, the simplest supersymmetry algebra in four-dimensional Euclidean space more closely resembles the N=2 than the N=1 supersymmetry algebra in four-dimensional Minkowski space. An extended supersymmetry algebra in four-dimensional Euclidean space is considered in this paper; its structure resembles that of N=4 supersymmetry in four-dimensional Minkowski space. The relationship of this algebra to the algebra found by dimensionally reducing the N=1 supersymmetry algebra in ten-dimensional Euclidean space to four-dimensional Euclidean space is examined. The dimensional reduction of N=1 super Yang-Mills theory in ten-dimensional Minkowski space to four-dimensional Euclidean space is also considered
Founding Gravitation in 4D Euclidean Space-Time Geometry
International Nuclear Information System (INIS)
Winkler, Franz-Guenter
2010-01-01
The Euclidean interpretation of special relativity which has been suggested by the author is a formulation of special relativity in ordinary 4D Euclidean space-time geometry. The natural and geometrically intuitive generalization of this view involves variations of the speed of light (depending on location and direction) and a Euclidean principle of general covariance. In this article, a gravitation model by Jan Broekaert, which implements a view of relativity theory in the spirit of Lorentz and Poincaré, is reconstructed and shown to fulfill the principles of the Euclidean approach after an appropriate reinterpretation.
On the invariant theory of Weingarten surfaces in Euclidean space
International Nuclear Information System (INIS)
Ganchev, Georgi; Mihova, Vesselka
2010-01-01
On any Weingarten surface in Euclidean space (strongly regular or rotational), we introduce locally geometric principal parameters and prove that such a surface is determined uniquely up to a motion by a special invariant function, which satisfies a natural nonlinear partial differential equation. This result can be interpreted as a solution to the Lund-Regge reduction problem for Weingarten surfaces in Euclidean space. We apply this theory to fractional-linear Weingarten surfaces and obtain the nonlinear partial differential equations describing them.
What does the Euclidean pseudoparticle do in Minkowski space
International Nuclear Information System (INIS)
Ju, I.
1978-08-01
Self-dual pseudoparticle solutions of the classical Yang-Mills field equation with finite action have been constructed in Minkowski space. It is shown that the topological structures apparent in Euclidean space are no longer present in Minkowski space. Topological charges become fractional, leading to unquantized axial-charge violation in processes involving fermions. 17 references.
Optimal Embeddings of Distance Regular Graphs into Euclidean Spaces
F. Vallentin (Frank)
2008-01-01
In this paper we give a lower bound for the least distortion embedding of a distance-regular graph into Euclidean space. We use the lower bound for finding the least distortion for Hamming graphs, Johnson graphs, and all strongly regular graphs. Our technique involves semidefinite programming.
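For a concrete sense of what "least distortion" measures, the following sketch computes the distortion of a given embedding: the worst expansion of a pairwise distance times the worst contraction. The 4-cycle placed on the corners of the unit square is a toy example chosen for illustration, not taken from the paper.

```python
import math
from itertools import combinations

def graph_distances(adj):
    # All-pairs shortest-path (BFS) distances of an unweighted graph.
    dists = {}
    for s in adj:
        seen, frontier, d = {s: 0}, [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in seen:
                        seen[v] = d
                        nxt.append(v)
            frontier = nxt
        dists[s] = seen
    return dists

def distortion(adj, coords):
    # Distortion of an embedding: the product of the worst expansion
    # and the worst contraction over all vertex pairs.
    d = graph_distances(adj)
    expand = contract = 0.0
    for u, v in combinations(adj, 2):
        g = d[u][v]
        e = math.dist(coords[u], coords[v])
        expand = max(expand, e / g)
        contract = max(contract, g / e)
    return expand * contract

# 4-cycle embedded on the corners of the unit square.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(distortion(adj, coords))  # sqrt(2): the diagonal pairs are contracted
```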
International Nuclear Information System (INIS)
Catoni, Francesco; Cannata, Roberto; Zampetti, Paolo
2005-08-01
The Riemann and Lorentz constant-curvature surfaces are investigated from a Euclidean point of view. The four surfaces (constant positive and constant negative curvature, with definite and non-definite line elements) are represented as surfaces in a Riemannian or in a particular semi-Riemannian flat space, and it is shown that the complex and the hyperbolic numbers allow one to obtain the same equations for the corresponding Riemann and Lorentz surfaces, respectively. Moreover, it is shown that the geodesics on the Lorentz surfaces establish, from a physical point of view, a link between curvature and fields. This result is obtained just as a consequence of the space-time geometrical symmetry, without invoking the famous Einstein general relativity postulate.
Biharmonic Submanifolds with Parallel Mean Curvature Vector in Pseudo-Euclidean Spaces
Energy Technology Data Exchange (ETDEWEB)
Fu, Yu, E-mail: yufudufe@gmail.com [Dongbei University of Finance and Economics, School of Mathematics and Quantitative Economics (China)
2013-12-15
In this paper, we investigate biharmonic submanifolds in pseudo-Euclidean spaces with arbitrary index and dimension. We give a complete classification of biharmonic spacelike submanifolds with parallel mean curvature vector in pseudo-Euclidean spaces. We also determine all biharmonic Lorentzian surfaces with parallel mean curvature vector field in pseudo-Euclidean spaces.
Spinors and supersymmetry in four-dimensional Euclidean space
International Nuclear Information System (INIS)
McKeon, D.G.C.; Sherry, T.N.
2001-01-01
Spinors in four-dimensional Euclidean space are treated using the decomposition of the Euclidean space SO(4) symmetry group into SU(2)xSU(2). Both 2- and 4-spinor representations of this SO(4) symmetry group are shown to differ significantly from the corresponding spinor representations of the SO(3, 1) symmetry group in Minkowski space. The simplest self-conjugate supersymmetry algebra allowed in four-dimensional Euclidean space is demonstrated to be an N=2 supersymmetry algebra which resembles the N=2 supersymmetry algebra in four-dimensional Minkowski space. The differences between the two supersymmetry algebras give rise to different representations; in particular, an analysis of the Clifford algebra structure shows that the momentum invariant is bounded above by the central charges in 4dE, while in 4dM the central charges bound the momentum invariant from below. Dimensional reduction of the N=1 SUSY algebra in six-dimensional Minkowski space (6dM) to 4dE reproduces our SUSY algebra in 4dE. This dimensional reduction can be used to introduce additional generators into the SUSY algebra in 4dE. Well-known interpolating maps are used to relate the N=2 SUSY algebra in 4dE derived in this paper to the N=2 SUSY algebra in 4dM. The nature of the spinors in 4dE allows us to write an axially gauge-invariant model which is shown to be both Hermitian and anomaly-free. No equivalent model exists in 4dM. Useful formulae in 4dE are collected together in two appendices.
A Class of Weingarten Surfaces in Euclidean 3-Space
Directory of Open Access Journals (Sweden)
Yu Fu
2013-01-01
The class of biconservative surfaces in Euclidean 3-space E^3 is defined in (Caddeo et al., 2012) by the equation A(grad H) = -H grad H for the mean curvature function H and the Weingarten operator A. In this paper, we consider the more general case of surfaces in E^3 satisfying A(grad H) = kH grad H for some constant k, which we call generalized biconservative surfaces. We show that this class of surfaces consists of linear Weingarten surfaces. We also give a complete classification of generalized biconservative surfaces in E^3.
Manifold learning to interpret JET high-dimensional operational space
International Nuclear Information System (INIS)
Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A
2013-01-01
In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with high risk of disruption and those with low risk of disruption. (paper)
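As a minimal illustration of the mapping technique named above, the sketch below trains a tiny 1-D self-organizing map on synthetic 2-D data. The JET study uses far larger maps on high-dimensional plasma signals; the map size, schedules and data here are all illustrative assumptions.

```python
import math
import random

def train_som(data, n_units=10, n_iters=2000, seed=0):
    # A 1-D self-organizing map on 2-D data.  Map size, iteration
    # count, learning-rate and radius schedules are all illustrative.
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for t in range(n_iters):
        x = data[rng.randrange(len(data))]
        lr = 0.5 * (1.0 - t / n_iters)                 # decaying learning rate
        radius = max(1.0, (n_units / 2) * (1.0 - t / n_iters))
        # best-matching unit, then a Gaussian neighbourhood update
        bmu = min(range(n_units), key=lambda i: math.dist(units[i], x))
        for i, w in enumerate(units):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            w[0] += lr * h * (x[0] - w[0])
            w[1] += lr * h * (x[1] - w[1])
    return units

# Two well-separated synthetic clusters; after training, the chain of
# units stretches across both, giving a 1-D "map" of the 2-D data.
rng = random.Random(1)
data = ([(rng.gauss(0.0, 0.05), rng.gauss(0.0, 0.05)) for _ in range(50)]
        + [(rng.gauss(1.0, 0.05), rng.gauss(1.0, 0.05)) for _ in range(50)])
units = train_som(data)
```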
Complex networks in the Euclidean space of communicability distances
Estrada, Ernesto
2012-06-01
We study the properties of complex networks embedded in a Euclidean space of communicability distances. The communicability distance between two nodes is defined as the difference between the weighted sum of walks self-returning to the nodes and the weighted sum of walks going from one node to the other. We give some indications that the communicability distance identifies the least crowded routes in networks where simultaneous submission of packages is taking place. We define an index Q based on communicability and shortest path distances, which allows reinterpreting the “small-world” phenomenon as the region of minimum Q in the Watts-Strogatz model. It also allows the classification and analysis of networks with different efficiency of spatial uses. Consequently, the communicability distance displays unique features for the analysis of complex networks in different scenarios.
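The communicability distance defined above can be computed directly from the matrix exponential of the adjacency matrix, as xi_pq = sqrt(G_pp + G_qq - 2*G_pq) with G = e^A. A minimal sketch on a 3-node path graph follows; the example graph and the series truncation are illustrative choices, not from the paper.

```python
import math

def expm(A, terms=30):
    # Matrix exponential by truncated Taylor series; adequate for the
    # small adjacency matrices used here.
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def communicability_distance(A, p, q):
    # xi_pq = sqrt(G_pp + G_qq - 2*G_pq) with G = exp(A): weighted
    # self-returning walks minus weighted walks between the two nodes.
    G = expm(A)
    return math.sqrt(G[p][p] + G[q][q] - 2.0 * G[p][q])

# Path graph 0-1-2: the end nodes are further apart in communicability
# distance than adjacent nodes are.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(communicability_distance(A, 0, 1), communicability_distance(A, 0, 2))
```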
Characterizations of Space Curves According to Bishop Darboux Vector in Euclidean 3-Space E3
Huseyin KOCAYIGIT; Ali OZDEMIR
2014-01-01
In this paper, we obtain some characterizations of space curves according to the Bishop frame in Euclidean 3-space E3 by using the Laplacian operator and the Levi-Civita connection. Furthermore, we give the general differential equations which characterize the space curves according to the Bishop Darboux vector and the normal Bishop Darboux vector.
Euclidean and Minkowski space formulations of linearized gravitational potential in various gauges
International Nuclear Information System (INIS)
Lim, S.C.
1979-01-01
We show that there exists a unitary map connecting linearized theories of the gravitational potential in vacuum, formulated in various covariant gauges and in the noncovariant radiation gauge. The free Euclidean gravitational potentials in covariant gauges satisfy the Markov property of Nelson, but are nonreflexive. For the noncovariant radiation gauge, the corresponding Euclidean field is reflexive but it only satisfies the Markov property with respect to special half spaces. The Feynman-Kac-Nelson formula is established for the Euclidean gravitational potential in the radiation gauge.
Large parallel volumes of finite and compact sets in d-dimensional Euclidean space
DEFF Research Database (Denmark)
Kampf, Jürgen; Kiderlen, Markus
The r-parallel volume V(C_r) of a compact subset C of d-dimensional Euclidean space is the volume of the set C_r of all points at Euclidean distance at most r > 0 from C. According to Steiner's formula, V(C_r) is a polynomial in r when C is convex. For finite sets C satisfying a certain geometric...
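The r-parallel volume just defined is easy to estimate numerically. The sketch below is a Monte-Carlo estimate in the plane; for a single point, which is convex, Steiner's formula gives exactly pi*r^2, which the estimate should approach. The sample size and seed are arbitrary illustrative choices.

```python
import math
import random

def parallel_volume_mc(points, r, n_samples=200000, seed=0):
    # Monte-Carlo estimate of the r-parallel volume of a finite set C
    # in the plane: sample a bounding box of C_r uniformly and count
    # the fraction of samples within distance r of some point of C.
    rng = random.Random(seed)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lo_x, hi_x = min(xs) - r, max(xs) + r
    lo_y, hi_y = min(ys) - r, max(ys) + r
    box_area = (hi_x - lo_x) * (hi_y - lo_y)
    hits = 0
    for _ in range(n_samples):
        q = (rng.uniform(lo_x, hi_x), rng.uniform(lo_y, hi_y))
        if min(math.dist(q, p) for p in points) <= r:
            hits += 1
    return box_area * hits / n_samples

# A single point is convex, so Steiner's formula gives exactly pi*r^2.
vol = parallel_volume_mc([(0.0, 0.0)], 1.0)
print(vol)  # close to pi = 3.14159...
```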
The literary uses of high-dimensional space
Directory of Open Access Journals (Sweden)
Ted Underwood
2015-12-01
Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.
Asymptotic analysis of fundamental solutions of Dirac operators on even dimensional Euclidean spaces
International Nuclear Information System (INIS)
Arai, A.
1985-01-01
We analyze the short distance asymptotic behavior of some quantities formed out of fundamental solutions of Dirac operators on even dimensional Euclidean spaces with finite dimensional matrix-valued potentials. (orig.)
Manton, Jonathan H.
2012-01-01
The Newton iteration is a popular method for minimising a cost function on Euclidean space. Various generalisations to cost functions defined on manifolds appear in the literature. In each case, the convergence rate of the generalised Newton iteration needed establishing from first principles. The present paper presents a framework for generalising iterative methods from Euclidean space to manifolds that ensures local convergence rates are preserved. It applies to any (memoryless) iterative m...
General Rotational Surfaces in Pseudo-Euclidean 4-Space with Neutral Metric
Aleksieva, Yana; Milousheva, Velichka; Turgay, Nurettin Cenk
2016-01-01
We define general rotational surfaces of elliptic and hyperbolic type in the pseudo-Euclidean 4-space with neutral metric which are analogous to the general rotational surfaces of C. Moore in the Euclidean 4-space. We study Lorentz general rotational surfaces with plane meridian curves and give the complete classification of minimal general rotational surfaces of elliptic and hyperbolic type, general rotational surfaces with parallel normalized mean curvature vector field, flat general rotati...
Data analysis in high-dimensional sparse spaces
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder
classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...
Intrinsic Regularization in a Lorentz invariant non-orthogonal Euclidean Space
Tornow, Carmen
2006-01-01
It is shown that the Lorentz transformations can be derived for a non-orthogonal Euclidean space. In this geometry one finds the same relations of special relativity as the ones known from the orthogonal Minkowski space. In order to illustrate the advantage of a non-orthogonal Euclidean metric the two-point Green’s function at x = 0 for a self-interacting scalar field is calculated. In contrast to the Minkowski space the one loop mass correction derived from this function gives a convergent r...
Scalar Green's functions in an Euclidean space with a conical-type line singularity
International Nuclear Information System (INIS)
Guimaraes, M.E.X.; Linet, B.
1994-01-01
In a Euclidean space with a conical-type line singularity, we determine the Green's function for a charged massive scalar field interacting with a magnetic flux running through the line singularity. We give an integral expression for the Green's function and a local form in the neighbourhood of the point source, where it is the sum of the usual Green's function in Euclidean space and a regular term. As an application, we derive the vacuum energy-momentum tensor in the massless case for an arbitrary magnetic flux. (orig.)
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2010-01-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii
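The exact brute-force baselines that such high-dimensional index structures are measured against are short to state. The following sketch is illustrative only, not the data structure of the paper: a linear-scan nearest-neighbor query and a quadratic closest-pair search.

```python
import math
import random

def nearest_neighbor(points, q):
    # Exact linear-scan NN: the O(n) baseline that high-dimensional
    # index structures aim to beat (or approximate much faster).
    return min(points, key=lambda p: math.dist(p, q))

def closest_pair(points):
    # Exact brute-force closest pair, O(n^2) distance evaluations.
    best, best_d = None, float("inf")
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = (p, q), d
    return best, best_d

rng = random.Random(0)
pts = [tuple(rng.random() for _ in range(16)) for _ in range(200)]
print(nearest_neighbor(pts, pts[5]) == pts[5])  # True: a point is its own NN
```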
On High Dimensional Searching Spaces and Learning Methods
DEFF Research Database (Denmark)
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz
2017-01-01
, and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some...
Linear embeddings of finite-dimensional subsets of Banach spaces into Euclidean spaces
International Nuclear Information System (INIS)
Robinson, James C
2009-01-01
This paper treats the embedding of finite-dimensional subsets of a Banach space B into finite-dimensional Euclidean spaces. When the Hausdorff dimension of X − X is finite and d_H(X − X) < k, a prevalent set of linear maps from B into R^k are injective on X. The proof motivates the definition of the 'dual thickness exponent', which is the key to proving that a prevalent set of such linear maps have Hölder continuous inverse when the box-counting dimension of X is finite and k > 2d_B(X). A related argument shows that if the Assouad dimension of X − X is finite and k > d_A(X − X), a prevalent set of such maps are bi-Lipschitz with logarithmic corrections. This provides a new result for compact homogeneous metric spaces via the Kuratowski embedding of (X, d) into L^∞(X)
Aspects of high-dimensional theories in embedding spaces
International Nuclear Information System (INIS)
Maia, M.D.; Mecklenburg, W.
1983-01-01
The question of whether physical meaning may be attributed to the extra dimensions provided by embedding procedures as applied to physical space-times is discussed. The similarities and differences of the present picture with respect to conventional Kaluza-Klein pictures are commented upon. (Author)
Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces
DEFF Research Database (Denmark)
Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel
The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...
Convexity and the Euclidean Metric of Space-Time
Directory of Open Access Journals (Sweden)
Nikolaos Kalogeropoulos
2017-02-01
We address the reasons why the “Wick-rotated”, positive-definite, space-time metric obeys the Pythagorean theorem. An answer is proposed based on the convexity and smoothness properties of the functional spaces purporting to provide the kinematic framework of approaches to quantum gravity. We employ moduli of convexity and smoothness which are eventually extremized by Hilbert spaces. We point out the potential physical significance that functional analytical dualities play in this framework. Following the spirit of the variational principles employed in classical and quantum physics, such Hilbert spaces dominate in a generalized functional integral approach. The metric of space-time is induced by the inner product of such Hilbert spaces.
de Wit, Bernard; Reys, Valentin
2017-12-01
Supergravity with eight supercharges in a four-dimensional Euclidean space is constructed at the full non-linear level by performing an off-shell time-like reduction of five-dimensional supergravity. The resulting four-dimensional theory is realized off-shell with the Weyl, vector and tensor supermultiplets and a corresponding multiplet calculus. Hypermultiplets are included as well, but they are themselves only realized with on-shell supersymmetry. We also briefly discuss the non-linear supermultiplet. The off-shell reduction leads to a full understanding of the Euclidean theory. A complete multiplet calculus is presented along the lines of the Minkowskian theory. Unlike in Minkowski space, chiral and anti-chiral multiplets are real and supersymmetric actions are generally unbounded from below. Precisely as in the Minkowski case, where one has different formulations of Poincaré supergravity upon introducing different compensating supermultiplets, one can also obtain different versions of Euclidean supergravity.
From Euclidean to Minkowski space with the Cauchy-Riemann equations
International Nuclear Information System (INIS)
Gimeno-Segovia, Mercedes; Llanes-Estrada, Felipe J.
2008-01-01
We present an elementary method to obtain Green's functions in non-perturbative quantum field theory in Minkowski space from Green's functions calculated in Euclidean space. Since in non-perturbative field theory the analytical structure of amplitudes often is unknown, especially in the presence of confined fields, dispersive representations suffer from systematic uncertainties. Therefore, we suggest using the Cauchy-Riemann equations, which perform the analytical continuation without assuming global information on the function in the entire complex plane, but only in the region through which the equations are solved. We use as an example the quark propagator in Landau gauge quantum chromodynamics, which is known from lattice and Dyson-Schwinger studies in Euclidean space. The drawback of the method is the instability of the Cauchy-Riemann equations against high-frequency noise, which makes it difficult to achieve good accuracy. We also point out a few curious details related to the Wick rotation. (orig.)
Green's functions in Bianchi type-I spaces. Relation between Minkowski and Euclidean approaches
International Nuclear Information System (INIS)
Bukhbinder, I.L.; Kirillova, E.N.
1988-01-01
A theory is considered for a free scalar field with conformal coupling in a curved space-time with a Bianchi type-I metric. A representation is obtained for the in-in Green's function G̃ in the form of an integral of a Schwinger-DeWitt kernel along a contour in the plane of complex-valued proper time. It is shown how a transition may be accomplished from Green's functions in space with the Euclidean signature to Green's functions in space with the Minkowski signature and vice versa.
Steiner tree heuristic in the Euclidean d-space using bottleneck distances
DEFF Research Database (Denmark)
Lorenzen, Stephan Sloth; Winter, Pawel
2016-01-01
Some of the most efficient heuristics for the Euclidean Steiner minimal tree problem in the d-dimensional space, d ≥ 2, use Delaunay tessellations and minimum spanning trees to determine small subsets of geometrically close terminals. Their low-cost Steiner trees are determined and concatenated in a greedy fashion to obtain a low-cost tree spanning all terminals. The weakness of this approach is that obtained solutions are topologically related to minimum spanning trees. To avoid this and to obtain even better solutions, bottleneck distances are utilized to determine good subsets of terminals...
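The minimum spanning tree backbone that such heuristics start from (and then improve with Steiner points) can be sketched in a few lines. This is a hedged illustration of the MST baseline only, using Prim's algorithm in plain NumPy; the function name is invented and it does not implement the authors' bottleneck-distance heuristic:

```python
import numpy as np

def euclidean_mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree (Prim).

    Steiner tree heuristics of the kind described above use this MST
    (together with Delaunay neighborhoods) as the structure to improve;
    the optimal Steiner tree is never longer than the MST.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # full pairwise Euclidean distance matrix
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()  # cheapest known edge from the tree to each node
    total = 0.0
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return total
```

For the four corners of a unit square the MST uses three sides of length 1, so the function returns 3.0; the optimal Steiner tree for the same terminals is shorter (1 + √3 ≈ 2.73), which is the gap such heuristics try to close.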
The stochastic versus the Euclidean approach to quantum fields on a static space-time
International Nuclear Information System (INIS)
De Angelis, G.F.; de Falco, D.
1986-01-01
Equations are presented which modify the definition of the Gaussian field in the Rindler chart in order to make contact with the Wightman state, the Hartle-Hawking state, and the Euclidean field. By taking Ornstein-Uhlenbeck processes the authors have chosen, in the sense of stochastic mechanics, to place precisely the Fulling modes in their harmonic oscillator ground state. In this respect, together with the periodicity of Minkowski space-time, the authors observe that the covariance of the Ornstein-Uhlenbeck process can be obtained by analytical continuation of the Wightman function of the harmonic oscillator at zero temperature
Derivatives, forms and vector fields on the κ-deformed Euclidean space
International Nuclear Information System (INIS)
Dimitrijevic, Marija; Moeller, Lutz; Tsouchnika, Efrossini
2004-01-01
The model of κ-deformed space is an interesting example of a noncommutative space, since it allows a deformed symmetry. In this paper, we present new results concerning different sets of derivatives on the coordinate algebra of κ-deformed Euclidean space. We introduce a differential calculus with two interesting sets of one-forms and higher-order forms. The transformation law of vector fields is constructed in accordance with the transformation behaviour of derivatives. The crucial property of the different derivatives, forms and vector fields is that in an n-dimensional spacetime there are always n of them. This is the key difference with respect to conventional approaches, in which the differential calculus is (n + 1)-dimensional. This work shows that derivative-valued quantities such as derivative-valued vector fields appear in a generic way on noncommutative spaces
A relationship between scalar Green functions on hyperbolic and Euclidean Rindler spaces
International Nuclear Information System (INIS)
Haba, Z
2007-01-01
We derive a formula connecting in any dimension the Green function on the (D + 1)-dimensional Euclidean Rindler space and the one for a minimally coupled scalar field with a mass m in the D-dimensional hyperbolic space. The relation takes a simple form in momentum space, where the Green functions are equal at the momenta (p₀, p) for Rindler space and (m, p̂) for hyperbolic space, with a simple additive relation between the squares of the mass and the momenta. The formula has applications to finite temperature Green functions, Green functions on the cone and on the (compactified) Milne spacetime. Analytic continuations and interacting quantum fields are briefly discussed
Non-Euclidean geometry and curvature two-dimensional spaces, volume 3
Cannon, James W
2017-01-01
This is the final volume of a three volume collection devoted to the geometry, topology, and curvature of 2-dimensional spaces. The collection provides a guided tour through a wide range of topics by one of the twentieth century's masters of geometric topology. The books are accessible to college and graduate students and provide perspective and insight to mathematicians at all levels who are interested in geometry and topology. Einstein showed how to interpret gravity as the dynamic response to the curvature of space-time. Bill Thurston showed us that non-Euclidean geometries and curvature are essential to the understanding of low-dimensional spaces. This third and final volume aims to give the reader a firm intuitive understanding of these concepts in dimension 2. The volume first demonstrates a number of the most important properties of non-Euclidean geometry by means of simple infinite graphs that approximate that geometry. This is followed by a long chapter taken from lectures the author gave at MSRI, wh...
Euclidean scalar Green function in a higher dimensional global monopole space-time
International Nuclear Information System (INIS)
Bezerra de Mello, E.R.
2002-01-01
We construct the explicit Euclidean scalar Green function associated with a massless field in a higher dimensional global monopole space-time, i.e., a (1+d)-space-time with d≥3 which presents a solid angle deficit. Our result is expressed in terms of an infinite sum of products of Legendre functions with Gegenbauer polynomials. Although this Green function cannot be expressed in a closed form, for the specific case where the solid angle deficit is very small, it is possible to develop the sum and obtain the Green function in a more workable expression. Having this expression it is possible to calculate the vacuum expectation value of some relevant operators. As an application of this formalism, we calculate the renormalized vacuum expectation values of the square of the scalar field, ⟨Φ²(x)⟩_Ren, and of the energy-momentum tensor, ⟨T_μν(x)⟩_Ren, for the global monopole space-time with spatial dimensions d=4 and d=5
Distribution of high-dimensional entanglement via an intra-city free-space link.
Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert
2017-07-24
Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
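The core LSH idea the LSB-tree builds on can be sketched with random-hyperplane hashing: nearby points tend to fall into the same hash bucket, so a query only scans its own bucket instead of the whole dataset. This is a generic single-table LSH sketch, not the paper's LSB-tree; the function names and parameters are illustrative:

```python
import numpy as np

def lsh_buckets(points, n_planes=8, seed=0):
    """Assign each point an integer bucket id via random-hyperplane LSH.

    Each of n_planes random hyperplanes contributes one sign bit; points
    on the same side of every plane share a bucket.
    """
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_planes, points.shape[1]))
    bits = (points @ planes.T) > 0
    # pack the sign bits into a single integer per point
    return (bits @ (1 << np.arange(n_planes))).astype(int)

def approx_nn(points, query, n_planes=8, seed=0):
    """Approximate NN: linear scan restricted to the query's bucket."""
    ids = lsh_buckets(points, n_planes, seed)
    qid = lsh_buckets(query[None, :], n_planes, seed)[0]
    cand = np.where(ids == qid)[0]
    if cand.size == 0:  # empty bucket: fall back to a full scan
        cand = np.arange(len(points))
    d = np.linalg.norm(points[cand] - query, axis=1)
    return cand[np.argmin(d)]
```

With 8 planes there are 256 buckets, so the candidate scan touches only a small fraction of the data; real implementations (including the LSB-forest) use multiple tables or trees to recover the quality guarantee a single table lacks.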
High-dimensional free-space optical communications based on orbital angular momentum coding
Zou, Li; Gu, Xiaofan; Wang, Le
2018-03-01
In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser which consists of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication to transmit a 256-gray-scale (16-gray-scale) picture. The results show that zero bit-error-rate performance has been achieved.
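The coding idea, one bit per OAM mode in the superposition, can be sketched as a simple bit-to-mode mapping. This is an illustrative sketch of the symbol alphabet only, under the assumption that bit i of the symbol controls the presence of OAM mode l = i + 1 (the mode numbering and function names are invented; the physical modulation and interferometric detection are of course not modelled):

```python
def encode_symbol(symbol, n_modes):
    """Map an integer symbol to the set of OAM topological charges to excite.

    Bit i of the symbol decides whether OAM mode l = i + 1 appears in the
    superposition; with n_modes modes this gives a 2**n_modes-ary alphabet
    (256-ary for 8 modes, 16-ary for 4, as in the abstract).
    """
    assert 0 <= symbol < 2 ** n_modes
    return [l for l in range(1, n_modes + 1) if (symbol >> (l - 1)) & 1]

def decode_symbol(modes):
    """Recover the integer symbol from the detected set of OAM modes."""
    return sum(1 << (l - 1) for l in modes)
```

For example, the symbol 5 (binary 101) with 4 modes excites modes l = 1 and l = 3, and detecting those two modes recovers 5.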
Superintegrability in two-dimensional Euclidean space and associated polynomial solutions
International Nuclear Information System (INIS)
Kalnins, E.G.; Miller, W. Jr; Pogosyan, G.S.
1996-01-01
In this work we examine the basis functions for those classical and quantum mechanical systems in two dimensions which admit separation of variables in at least two coordinate systems. We do this for the corresponding systems defined in Euclidean space and on the two-dimensional sphere. We present all of these cases from a unified point of view. In particular, all of the spectral functions that arise via variable separation have their essential features expressed in terms of their zeros. The principal new results are the details of the polynomial bases for each of the nonsubgroup bases, not just the subgroup Cartesian and polar coordinate cases, and the details of the structure of the quadratic algebras. We also study the polynomial eigenfunctions in elliptic coordinates of the N-dimensional isotropic quantum oscillator. 28 refs., 1 tab
New results on embeddings of polyhedra and manifolds in Euclidean spaces
International Nuclear Information System (INIS)
Repovs, D; Skopenkov, A B
1999-01-01
The aim of this survey is to present several classical results on embeddings and isotopies of polyhedra and manifolds in R^m. We also describe the revival of interest in this beautiful branch of topology and give an account of new results, including an improvement of the Haefliger-Weber theorem on the completeness of the deleted product obstruction to embeddability and isotopy of highly connected manifolds in R^m (Skopenkov) as well as the unimprovability of this theorem for polyhedra (Freedman, Krushkal, Teichner, Segal, Skopenkov, and Spiez) and for manifolds without the necessary connectedness assumption (Skopenkov). We show how algebraic obstructions (in terms of cohomology, characteristic classes, and equivariant maps) arise from geometric problems of embeddability in Euclidean spaces. Several classical and modern results on completeness or incompleteness of these obstructions are stated and proved. By these proofs we illustrate classical and modern tools of geometric topology (engulfing, the Whitney trick, van Kampen and Casson finger moves, and their generalizations)
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
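The ensemble structure described above (partition the feature space, train one base classifier per subset, combine by majority vote) can be sketched in plain NumPy. As a hedged stand-in for the paper's method, the partition here is random rather than redundancy-driven and the base learner is a nearest-centroid classifier rather than an SVM; function names are invented:

```python
import numpy as np

def fit_centroids(X, y):
    """Fit a nearest-centroid base classifier: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    """Predict the class whose centroid is nearest in Euclidean distance."""
    classes, cents = model
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def ensemble_predict(X_train, y_train, X_test, n_parts=4, seed=0):
    """Feature-space-partitioning ensemble with majority voting."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X_train.shape[1])
    votes = []
    for part in np.array_split(perm, n_parts):  # disjoint feature subsets
        model = fit_centroids(X_train[:, part], y_train)
        votes.append(predict_centroids(model, X_test[:, part]))
    votes = np.stack(votes)  # shape (n_parts, n_test)
    # majority vote across the per-subset predictions
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Each base classifier sees only a slice of the dimensions, which is what makes the approach tractable when the full feature space is very high-dimensional.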
Classical and quantum integrability of 2D dilaton gravities in Euclidean space
International Nuclear Information System (INIS)
Bergamin, L; Grumiller, D; Kummer, W; Vassilevich, D V
2005-01-01
Euclidean dilaton gravity in two dimensions is studied exploiting its representation as a complexified first order gravity model. All local classical solutions are obtained. A global discussion reveals that for a given model only a restricted class of topologies is consistent with the metric and the dilaton. A particular case of string motivated Liouville gravity is studied in detail. Path integral quantization in generic Euclidean dilaton gravity is performed non-perturbatively by analogy to the Minkowskian case
$O(N)$ model in Euclidean de Sitter space: beyond the leading infrared approximation
Nacir, Diana López; Trombetta, Leonardo G
2016-01-01
We consider an $O(N)$ scalar field model with quartic interaction in $d$-dimensional Euclidean de Sitter space. In order to avoid the problems of the standard perturbative calculations for light and massless fields, we generalize to the $O(N)$ theory a systematic method introduced previously for a single field, which treats the zero modes exactly and the nonzero modes perturbatively. We compute the two-point functions taking into account not only the leading infrared contribution, coming from the self-interaction of the zero modes, but also corrections due to the interaction of the ultraviolet modes. For the model defined in the corresponding Lorentzian de Sitter spacetime, we obtain the two-point functions by analytical continuation. We point out that a partial resummation of the leading secular terms (which necessarily involves nonzero modes) is required to obtain a decay at large distances for massless fields. We implement this resummation along with a systematic double expansion in an effective coupling c...
Individual-based models for adaptive diversification in high-dimensional phenotype spaces.
Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael
2016-02-07
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nam, Julia EunJu; Mueller, Klaus
2013-02-01
Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
International Nuclear Information System (INIS)
Bezerra de Mello, E.R.
2006-01-01
In this paper we present, in integral form, the Euclidean Green function associated with a massless scalar field in the five-dimensional Kaluza-Klein magnetic monopole superposed with a global monopole, admitting a nontrivial coupling between the field and the geometry. This Green function is expressed as the sum of two contributions: the first, related to the uncharged component of the field, is similar to the Green function associated with a scalar field in a four-dimensional global monopole space-time. The second contains the information of all the other components. Using this Green function it is possible to study the vacuum polarization effects on this space-time. Explicitly we calculate the renormalized vacuum expectation value ⟨Φ*(x)Φ(x)⟩_Ren, which in turn is also expressed as the sum of two contributions
Directory of Open Access Journals (Sweden)
L.V. Arun Shalin
2016-01-01
Clustering is a process of grouping elements together, designed in such a way that the data points assigned to a cluster are more comparable to each other than to the remaining data points. Certain difficulties are ubiquitous and abundant when clustering high-dimensional data. Work based on anonymization methods for high-dimensional data spaces has failed to address the problem of dimensionality reduction for non-binary databases. In this work we study methods of dimensionality reduction for non-binary databases; analyzing their behavior yields a performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present an analysis of the attributes in the non-binary database, and cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in a performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features and replaces them with the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization
Wang, Yong-Long; Jiang, Hua; Zong, Hong-Shi
2017-08-01
In the spirit of the thin-layer quantization approach, we give the formula of the geometric influences of a particle confined to a curved surface embedded in three-dimensional Euclidean space. The geometric contributions can result from the reduced commutation relation between the acted function depending on normal variable and the normal derivative. According to the formula, we obtain the geometric potential, geometric momentum, geometric orbital angular momentum, geometric linear Rashba, and cubic Dresselhaus spin-orbit couplings. As an example, a truncated cone surface is considered. We find that the geometric orbital angular momentum can provide an azimuthal polarization for spin, and the sign of the geometric Dresselhaus spin-orbit coupling can be flipped through the inclination angle of generatrix.
Biess, Armin
2013-01-01
The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks ranging from three-dimensional unconstrained movements of a four degree of freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic) and the compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.
Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel
2014-01-01
Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
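The univariate building block, fitting a GPD to exceedances over a high threshold and scoring test points by their tail probability, can be sketched with SciPy. This is a hedged illustration of the standard peaks-over-threshold recipe (function names invented), not the paper's high-dimensional extension:

```python
import numpy as np
from scipy.stats import genpareto

def fit_tail(data, q=0.95):
    """Fit a GPD to exceedances over the q-quantile (peaks over threshold).

    Returns the threshold together with the fitted GPD shape and scale
    (location pinned to 0, since exceedances start at the threshold).
    """
    thr = np.quantile(data, q)
    exceedances = data[data > thr] - thr
    c, loc, scale = genpareto.fit(exceedances, floc=0)
    return thr, c, scale

def tail_probability(x, thr, c, scale):
    """P(X > x) under the fitted tail model; small values flag novelty."""
    return genpareto.sf(x - thr, c, loc=0.0, scale=scale)
```

A point far in the tail receives a much smaller tail probability than one just beyond the threshold, so thresholding this probability yields the "normal"/"abnormal" decision boundary the abstract describes.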
Improved Epstein-Glaser Renormalization in Coordinate Space I. Euclidean Framework
International Nuclear Information System (INIS)
Gracia-Bondia, Jose M.
2003-01-01
In a series of papers, we investigate the reformulation of Epstein-Glaser renormalization in coordinate space, both in analytic and (Hopf) algebraic terms. This first article deals with analytical aspects. Some of the (historically good) reasons for the divorces of the Epstein-Glaser method, both from mainstream quantum field theory and the mathematical literature on distributions, are made plain; and overcome
Du, Jing; Wang, Jian
2015-11-01
Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ) (l = 0, ±1, ±2, …), where φ is the azimuthal angle and l corresponds to the topological number, are orthogonal with each other. This feature of Bessel beams provides a new dimension to code/decode data information on the OAM state of light, and the theoretical infinity of topological number enables possible high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams having the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate 12 m distance obstruction-free optical m-ary coding/decoding using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER of hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of light.
Wang, Wei; Yang, Jiong
With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.
Kulczycki, Stefan
2008-01-01
This accessible approach features two varieties of proofs: stereometric and planimetric, as well as elementary proofs that employ only the simplest properties of the plane. A short history of geometry precedes a systematic exposition of the principles of non-Euclidean geometry.Starting with fundamental assumptions, the author examines the theorems of Hjelmslev, mapping a plane into a circle, the angle of parallelism and area of a polygon, regular polygons, straight lines and planes in space, and the horosphere. Further development of the theory covers hyperbolic functions, the geometry of suff
Learning Euclidean Embeddings for Indexing and Classification
National Research Council Canada - National Science Library
Athitsos, Vassilis; Alon, Joni; Sclaroff, Stan; Kollios, George
2004-01-01
BoostMap is a recently proposed method for efficient approximate nearest neighbor retrieval in arbitrary non-Euclidean spaces with computationally expensive and possibly non-metric distance measures...
Spacetime and Euclidean geometry
Brill, Dieter; Jacobson, Ted
2006-04-01
Using only the principle of relativity and Euclidean geometry we show in this pedagogical article that the square of proper time or length in a two-dimensional spacetime diagram is proportional to the Euclidean area of the corresponding causal domain. We use this relation to derive the Minkowski line element by two geometric proofs of the spacetime Pythagoras theorem.
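The proportionality between squared proper time and the Euclidean area of the causal domain can be checked directly in light-cone coordinates. This is a quick algebraic sketch with c = 1, not the article's purely geometric proof:

```latex
% Causal domain of a timelike segment from the origin to (\Delta t, \Delta x):
% in light-cone coordinates u = t - x, v = t + x it is the rectangle
% 0 \le u \le \Delta u,\; 0 \le v \le \Delta v, and
\Delta u \,\Delta v = (\Delta t - \Delta x)(\Delta t + \Delta x)
                    = \Delta t^2 - \Delta x^2 = \tau^2 .
% The Euclidean area in the (t,x) diagram carries the Jacobian
% \,dt\,dx = \tfrac12\,du\,dv, so
A = \tfrac12\,\Delta u\,\Delta v
\quad\Longrightarrow\quad
\tau^2 = 2A .
```

The factor of 2 is convention-dependent (it changes with the scaling of the light-cone axes), which is why the abstract states a proportionality rather than an equality.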
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high-dimensional feature space. When estimating such models from small training sets we face the problem that points projected onto the span of the training data set input vectors follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
Micrononcausal Euclidean wave functions
International Nuclear Information System (INIS)
Enatsu, H.; Takenaka, A.; Okazaki, M.
1978-01-01
A theory which describes the internal attributes of hadrons in terms of space-time wave functions is presented. In order to develop the theory on the basis of a rather realistic model, covariant wave equations are first derived for the deuteron, in which the co-ordinates of the centre of mass of two nucleons can be defined unambiguously. Then the micro-noncausal behaviour of virtual mesons mediating between the two nucleons is expressed by means of wave functions depending only on the relative Euclidean co-ordinates with respect to the centre of mass of the two nucleons; the wave functions are assumed to obey the O(4) and SU(2) × SU(2) groups. The properties of the wave functions under space inversion, time reversal and particle-antiparticle conjugation are investigated. It is found that the internal attributes of the mesons, such as spin, isospin, strangeness, intrinsic parity, charge parity and G-parity, are explained consistently. The theory is applicable also to the case of baryons.
Local algebras in Euclidean quantum field theory
International Nuclear Information System (INIS)
Guerra, Francesco.
1975-06-01
The general structure of the local observable algebras of Euclidean quantum field theory is described, considering the very simple examples of the free scalar field, the vector meson field, and the electromagnetic field. The role of Markov properties, and the relations between Euclidean theory and Hamiltonian theory in Minkowski space-time, are especially emphasized. No conflict appears between covariance (in the Euclidean sense) and locality (in the Markov sense) on the one hand and positive definiteness of the metric on the other.
Trudeau, Richard J
1986-01-01
How unique and definitive is Euclidean geometry in describing the "real" space in which we live? Richard Trudeau confronts the fundamental question of truth and its representation through mathematical models in The Non-Euclidean Revolution. First, the author analyzes geometry in its historical and philosophical setting; second, he examines a revolution every bit as significant as the Copernican revolution in astronomy and the Darwinian revolution in biology; third, on the most speculative level, he questions the possibility of absolute knowledge of the world. Trudeau writes in a lively, entertaining, and highly accessible style. His book provides one of the most stimulating and personal presentations of a struggle with the nature of truth in mathematics and the physical world. A portion of the book won the Pólya Prize, a distinguished award from the Mathematical Association of America. "Trudeau meets the challenge of reaching a broad audience in clever ways...(The book) is a good addition to our literature o...
Energy Technology Data Exchange (ETDEWEB)
Oblakov, Konstantin I; Oblakova, Tat'yana A [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)]
2012-10-31
The paper is devoted to the characteristic of a graph that is the minimal (over all embeddings of the graph into a space of given dimension) number of points that belong to the same hyperplane. Upper and lower estimates for this number are given that linearly depend on the dimension of the space. For trees a more precise upper estimate is obtained, which asymptotically coincides with the lower one for large dimension of the space. Bibliography: 9 titles.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
Directory of Open Access Journals (Sweden)
Wei Ji Ma
Full Text Available Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
Phylogenetic trees and Euclidean embeddings.
Layer, Mark; Rhodes, John A
2017-01-01
It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
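A minimal numerical sketch of the square-root embedding this abstract describes (the 4-taxon tree metric below is my own illustrative example, not from the paper): take elementwise square roots of the tree distances, then recover Euclidean coordinates by classical multidimensional scaling.

```python
import numpy as np

# Additive distances on a hypothetical 4-taxon tree ((1,2),(3,4)).
D = np.array([[0., 3., 7., 8.],
              [3., 0., 6., 7.],
              [7., 6., 0., 5.],
              [8., 7., 5., 0.]])
Dsqrt = np.sqrt(D)                      # the de Vienne et al. transform

# Classical MDS on the squared transformed distances (= the original D).
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
G = -0.5 * J @ (Dsqrt ** 2) @ J         # Gram matrix; PSD iff embeddable

w, V = np.linalg.eigh(G)
X = V * np.sqrt(np.clip(w, 0.0, None))  # rows = Euclidean coordinates

# Pairwise Euclidean distances of X reproduce the sqrt-transformed distances.
recon = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(recon, Dsqrt))        # True: tree metrics embed after sqrt
```

The same check fails for general (non-tree) metrics: G then acquires genuinely negative eigenvalues and no exact embedding exists.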
On the scaling limits in the Euclidean (quantum) field theory
International Nuclear Information System (INIS)
Gielerak, R.
1983-01-01
The author studies the concept of scaling limits in the context of constructive field theory. He finds that the domain of attraction of a free massless Euclidean scalar field in two-dimensional space-time contains almost all Euclidean self-interacting models of quantum fields so far constructed. The renormalized scaling limits of the Wick polynomials of several self-interacting Euclidean field theory models are shown to be the same as in the free field theory. (Auth.)
Draisma, J.; Horobet, E.; Ottaviani, G.; Sturmfels, B.; Thomas, R.R.; Zhi, L.; Watt, M.
2014-01-01
The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest
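The Eckart-Young case mentioned in this abstract can be sketched in a few lines (the matrix is a made-up example): truncating the singular value decomposition gives the nearest lower-rank matrix in Euclidean (Frobenius) distance.

```python
import numpy as np

M = np.array([[3., 1.],
              [1., 3.]])               # data matrix (illustrative)

U, s, Vt = np.linalg.svd(M)            # singular values here: 4 and 2
s_trunc = np.array([s[0], 0.0])        # drop all but the largest
M1 = (U * s_trunc) @ Vt                # nearest rank-1 matrix

print(M1)                              # [[2. 2.], [2. 2.]]
```

For this symmetric example the nearest rank-1 matrix is the projection onto the leading eigenvector (1,1)/√2 scaled by the top singular value 4.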
Coxeter, HSM
1965-01-01
This textbook introduces non-Euclidean geometry, and the third edition adds a new chapter, including a description of the two families of 'mid-lines' between two given lines and an elementary derivation of the basic formulae of spherical trigonometry and hyperbolic trigonometry, and other new material.
Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
The relation between Euclidean and Lorentzian 2D quantum gravity
Ambjørn, J.; Correia, J.; Kristjansen, C.; Loll, R.
1999-01-01
Starting from 2D Euclidean quantum gravity, we show that one recovers 2D Lorentzian quantum gravity by removing all baby universes. Using a peeling procedure to decompose the discrete, triangulated geometries along a one-dimensional path, we explicitly associate with each Euclidean space-time a
Topological methods in Euclidean spaces
Naber, Gregory L
2000-01-01
Extensive development of a number of topics central to topology, including elementary combinatorial techniques, Sperner's Lemma, the Brouwer Fixed Point Theorem, homotopy theory and the fundamental group, simplicial homology theory, the Hopf Trace Theorem, the Lefschetz Fixed Point Theorem, the Stone-Weierstrass Theorem, and Morse functions. Includes a new section of solutions to selected problems.
CSIR Research Space (South Africa)
McLaren, M.
2012-07-01
Full Text Available High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...
Spinors in euclidean field theory, complex structures and discrete symmetries
International Nuclear Information System (INIS)
Wetterich, C.
2011-01-01
We discuss fermions for arbitrary dimensions and signature of the metric, with special emphasis on euclidean space. Generalized Majorana spinors are defined for d = 2, 3, 4, 8, 9 mod 8, independently of the signature. These objects permit a consistent analytic continuation of Majorana spinors in Minkowski space to euclidean signature. Compatibility of charge conjugation with complex conjugation requires for euclidean signature a new complex structure which involves a reflection in euclidean time. The possible complex structures for Minkowski and euclidean signature can be understood in terms of a modulo two periodicity in the signature. The concepts of a real action and hermitean observables depend on the choice of the complex structure. For a real action the expectation values of all hermitean multi-fermion observables are real. This holds for arbitrary signature, including euclidean space. In particular, a chemical potential is compatible with a real action for the euclidean theory. We also discuss the discrete symmetries of parity, time reversal and charge conjugation for arbitrary dimension and signature.
Non-euclidean simplex optimization
International Nuclear Information System (INIS)
Silver, G.L.
1977-01-01
Geometric optimization techniques useful for studying chemical equilibrium traditionally rely upon principles of euclidean geometry, but such algorithms may also be based upon principles of a non-euclidean geometry. The sequential simplex method is adapted to the hyperbolic plane, and application of optimization to problems such as the potentiometric titration of plutonium is suggested.
Broadband invisibility by non-Euclidean cloaking.
Leonhardt, Ulf; Tyc, Tomás
2009-01-02
Invisibility and negative refraction are both applications of transformation optics where the material of a device performs a coordinate transformation for electromagnetic fields. The device creates the illusion that light propagates through empty flat space, whereas in physical space, light is bent around a hidden interior or seems to run backward in space or time. All of the previous proposals for invisibility require materials with extreme properties. Here we show that transformation optics of a curved, non-Euclidean space (such as the surface of a virtual sphere) relax these requirements and can lead to invisibility in a broad band of the spectrum.
Biased discriminant euclidean embedding for content-based image retrieval.
Bian, Wei; Tao, Dacheng
2010-02-01
With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and does not suffer from the undersampled problem. To consider unlabelled samples, a manifold regularization-based item is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.
Matrices and Graphs in Euclidean Geometry
Czech Academy of Sciences Publication Activity Database
Fiedler, Miroslav
2005-01-01
Roč. 14, - (2005), s. 51-58 E-ISSN 1081-3810 R&D Projects: GA AV ČR IAA1030302 Institutional research plan: CEZ:AV0Z10300504 Keywords: Euclidean space * Gram matrix * biorthogonal bases * simplex * interior angle * Steiner circumscribed ellipsoid * right simplex Subject RIV: BA - General Mathematics http://www.math.technion.ac.il/iic/ela/ela-articles/14.html
Chernozhukov, Victor; Hansen, Christian; Spindler, Martin
2016-01-01
In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
Non-Euclidean Geometry and Gravitation
Directory of Open Access Journals (Sweden)
Stavroulakis N.
2006-04-01
Full Text Available A great deal of misunderstandings and mathematical errors are involved in the currently accepted theory of the gravitational field generated by an isotropic spherical mass. The purpose of the present paper is to provide a short account of the rigorous mathematical theory and exhibit a new formulation of the problem. The solution of the corresponding equations of gravitation points out several new and unusual features of the stationary gravitational field which are related to the non-Euclidean structure of the space. Moreover it precludes the black hole from being a mathematical and physical notion.
Fast Exact Euclidean Distance (FEED) Transformation
Schouten, Theo; Kittler, J.; van den Broek, Egon; Petrou, M.; Nixon, M.
2004-01-01
Fast Exact Euclidean Distance (FEED) transformation is introduced, starting from the inverse of the distance transformation. The prohibitive computational cost of a naive implementation of traditional Euclidean Distance Transformation is tackled by three operations: restriction of both the number
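The abstract is truncated here, but the baseline FEED improves on is easy to state. The brute-force exact EDT below is my own illustration of that baseline, not the FEED algorithm: it scans every object pixel for every grid cell, which is exactly the prohibitive cost the paper tackles.

```python
import math

def naive_edt(grid):
    """Brute-force exact Euclidean distance transform: for each cell, the
    distance to the nearest object pixel (value 1). Cost: O(cells * objects)."""
    obj = [(r, c) for r, row in enumerate(grid)
                  for c, v in enumerate(row) if v]
    return [[min(math.hypot(r - ro, c - co) for ro, co in obj)
             for c in range(len(grid[0]))]
            for r in range(len(grid))]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
d = naive_edt(grid)
print(d[1][1], d[0][1], round(d[0][0], 3))   # 0.0 1.0 1.414
```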
Growth Modeling of Human Mandibles using Non-Euclidean Metrics
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Larsen, Rasmus; Wrobel, Mark
2003-01-01
From a set of 31 three-dimensional CT scans we model the temporal shape and size of the human mandible. Each anatomical structure is represented using 14851 semi-landmarks, and mapped into Procrustes tangent space. Exploratory subspace analyses are performed leading to linear models of mandible shape evolution in Procrustes space. The traditional variance analysis results in a one-dimensional growth model. However, working in a non-Euclidean metric results in a multimodal model with uncorrelated modes of biological variation. The applied non-Euclidean metric is governed by the correlation structure of the estimated noise in the data. The generative models are compared, and evaluated on the basis of a cross validation study. The new non-Euclidean analysis is completely data driven. It not only gives comparable results w.r.t. previous studies of the mean modelling error, but in addition...
Flexible intuitions of Euclidean geometry in an Amazonian indigene group
Izard, Véronique; Pica, Pierre; Spelke, Elizabeth S.; Dehaene, Stanislas
2011-01-01
Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched child controls from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with those of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu's estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics. PMID:21606377
Euclidean geometry and its subgeometries
Specht, Edward John; Calkins, Keith G; Rhoads, Donald H
2015-01-01
In this monograph, the authors present a modern development of Euclidean geometry from independent axioms, using up-to-date language and providing detailed proofs. The axioms for incidence, betweenness, and plane separation are close to those of Hilbert. This is the only axiomatic treatment of Euclidean geometry that uses axioms not involving metric notions and that explores congruence and isometries by means of reflection mappings. The authors present thirteen axioms in sequence, proving as many theorems as possible at each stage and, in the process, building up subgeometries, most notably the Pasch and neutral geometries. Standard topics such as the congruence theorems for triangles, embedding the real numbers in a line, and coordinatization of the plane are included, as well as theorems of Pythagoras, Desargues, Pappus, Menelaus, and Ceva. The final chapter covers consistency and independence of axioms, as well as independence of definition properties. There are over 300 exercises; solutions to many of the...
Algebraic Methods for Counting Euclidean Embeddings of Rigid Graphs
I.Z. Emiris; E.P. Tsigaridas; A. Varvitsiotis (Antonios); E.R. Gasner
2009-01-01
The study of (minimally) rigid graphs is motivated by numerous applications, mostly in robotics and bioinformatics. A major open problem concerns the number of embeddings of such graphs, up to rigid motions, in Euclidean space. We capture embeddability by polynomial systems
Fisher type inequalities for Euclidean t-designs
Delsarte, Ph.; Seidel, J.J.
1989-01-01
The notion of a Euclidean t-design is analyzed in the framework of appropriate inner product spaces of polynomial functions. Some Fisher type inequalities are obtained in a simple manner by this method. The same approach is used to deal with certain analogous combinatorial designs.
Directory of Open Access Journals (Sweden)
Malloy Vanja
2013-09-01
Full Text Available John Keats once wrote that ‘there is no such thing as time and space’, believing rather that time and space are mental constructs, subject to a variety of forms and as diverse as the human mind. In the 1920s and 1930s, modern physics in many ways supported this idea through the philosophical writings that brought the Theory of General Relativity to the masses, by scientists such as Arthur Eddington and Albert Einstein. These new concepts of modern physics fundamentally changed our understanding of time and space and had substantial philosophical implications, which were absorbed by modern artists, resulting in the 1936 Dimensionist Manifesto. Seeking to internalize the developments of modern science within modern art, this manifesto was widely endorsed by the most prominent figures of the avant-garde, such as Marcel Duchamp, Jean Arp, Naum Gabo, Joan Miró, László Moholy-Nagy, Wassily Kandinsky and Alexander Calder. Of particular interest to this manifesto was the new concept of the fourth dimension, which in many ways revolutionized the arts. Importantly, its interpretation varied widely in the artistic community, ranging from a purely physical four-dimensional space, to a kinetic concept in which space and time are linked, to a metaphysical interest in a space that exists beyond the material realm. The impact of modern science and astronomy on avant-garde art is currently a burgeoning area of research, with considerable implications for our rethinking of substantial artistic figures of this era. Through a case study of Alexander Calder’s Mobiles and Ben Nicholson’s Reliefs, this paper explores how these artworks were informed by an interest in modern science.
Some nonunitary, indecomposable representations of the Euclidean algebra e(3)
International Nuclear Information System (INIS)
Douglas, Andrew; De Guise, Hubert
2010-01-01
The Euclidean group E(3) is the noncompact, semidirect product group E(3) ≅ R^3 ⋊ SO(3). It is the Lie group of orientation-preserving isometries of three-dimensional Euclidean space. The Euclidean algebra e(3) is the complexification of the Lie algebra of E(3). We construct three distinct families of finite-dimensional, nonunitary representations of e(3) and show that each representation is indecomposable. The representations of the first family are explicitly realized as subspaces of the polynomial ring F[X,Y,Z] with the action of e(3) given by differential operators. The other families are constructed via duals and tensor products of the representations within the first family. We describe subrepresentations, quotients and duals of these indecomposable representations.
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
Euclidean distance degrees of real algebraic groups
Baaijens, J.A.; Draisma, J.
2015-01-01
We study the problem of finding, in a real algebraic matrix group, the matrix closest to a given data matrix. We do so from the algebro-geometric perspective of Euclidean distance degrees. We recover several classical results; and among the new results that we prove is a formula for the Euclidean
Curves of restricted type in euclidean spaces
Directory of Open Access Journals (Sweden)
Bengü Kılıç Bayram
2014-01-01
Full Text Available Submanifolds of restricted type were introduced in [7]. In the present study we consider curves of restricted type in E^m. We give some special examples. We also show that a spherical curve in S^2(r) ⊂ E^3 is of restricted type if and only if either f(s) is constant or a linear function of s of the form f(s) = ±s + b, and that every closed W-curve of rank k and of length 2πr in E^2k is of restricted type.
Modeling high dimensional multichannel brain signals
Hu, Lechuan
2017-03-27
In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control sparsity) and applies constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.
Modeling high dimensional multichannel brain signals
Hu, Lechuan; Fortin, Norbert; Ombao, Hernando
2017-01-01
In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control sparsity) and applies constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.
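A toy sketch of the two-step LASSO-then-least-squares idea described above (the simulated VAR(1), its dimensions, and the plain coordinate-descent LASSO are my own illustrative choices, not the authors' LASSLE implementation): LASSO selects a sparse support for each channel's regression, then ordinary least squares refits on that support to reduce shrinkage bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sparse VAR(1), x_t = A x_{t-1} + noise, as a stand-in for
# multichannel signals (coefficients and dimensions are illustrative).
d, T = 5, 2000
A = np.zeros((d, d))
A[0, 0], A[1, 0], A[2, 2] = 0.5, 0.4, -0.45
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + 0.1 * rng.standard_normal(d)

Y, Z = X[1:], X[:-1]                       # responses and lagged predictors

def lasso_cd(Z, y, lam, iters=200):
    """Plain coordinate-descent LASSO: min 0.5*||y - Zb||^2 + lam*||b||_1."""
    b = np.zeros(Z.shape[1])
    col_sq = (Z ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(Z.shape[1]):
            rho = Z[:, j] @ (y - Z @ b + Z[:, j] * b[j])
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

A_hat = np.zeros((d, d))
for i in range(d):                         # one penalized regression per channel
    b = lasso_cd(Z, Y[:, i], lam=5.0)     # step 1: sparsity
    S = np.abs(b) > 1e-8                   # selected support
    if S.any():                            # step 2: OLS refit to reduce bias
        A_hat[i, S] = np.linalg.lstsq(Z[:, S], Y[:, i], rcond=None)[0]

print(np.round(A_hat, 2))                  # the three true entries reappear
```

The refit step matters: the raw LASSO coefficients are shrunk well below their true values, while the OLS refit on the selected support is approximately unbiased.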
Euclidean to Minkowski Bethe-Salpeter amplitude and observables
International Nuclear Information System (INIS)
Carbonell, J.; Frederico, T.; Karmanov, V.A.
2017-01-01
We propose a method to reconstruct the Bethe-Salpeter amplitude in Minkowski space given the Euclidean Bethe-Salpeter amplitude - or alternatively the light-front wave function - as input. The method is based on the numerical inversion of the Nakanishi integral representation and computing the corresponding weight function. This inversion procedure is, in general, rather unstable, and we propose several ways to considerably reduce the instabilities. In terms of the Nakanishi weight function, one can easily compute the BS amplitude, the LF wave function and the electromagnetic form factor. The latter are very stable in spite of residual instabilities in the weight function. This procedure allows both to continue the Euclidean BS solution into Minkowski space and to obtain a BS amplitude from a LF wave function. (orig.)
Tunneling in expanding Universe: Euclidean and Hamiltonian approaches
International Nuclear Information System (INIS)
Goncharov, A.S.; Linde, A.D.
1986-01-01
The theory of false vacuum decay in de Sitter space and in the inflationary Universe, and also the theory of the creation of the Universe 'from nothing', are discussed. It is explained why tunneling in the inflationary Universe differs from that in de Sitter space and cannot be exactly homogeneous. It is shown that in several important cases the Euclidean approach should be considerably modified or is absolutely inapplicable for the description of tunneling in the expanding Universe and of the process of the quantum creation of the Universe. The Hamiltonian approach to the theory of tunneling in the expanding Universe is developed. The results obtained by this method are compared with the results obtained by the Euclidean approach.
Dynamic hyperbolic geometry: building intuition and understanding mediated by a Euclidean model
Moreno-Armella, Luis; Brady, Corey; Elizondo-Ramirez, Rubén
2018-05-01
This paper explores a deep transformation in mathematical epistemology and its consequences for teaching and learning. With the advent of non-Euclidean geometries, direct, iconic correspondences between physical space and the deductive structures of mathematical inquiry were broken. For non-Euclidean ideas even to become thinkable the mathematical community needed to accumulate over twenty centuries of reflection and effort: a precious instance of distributed intelligence at the cultural level. In geometry education after this crisis, relations between intuitions and geometrical reasoning must be established philosophically, rather than taken for granted. One approach seeks intuitive supports only for Euclidean explorations, viewing non-Euclidean inquiry as fundamentally non-intuitive in nature. We argue for moving beyond such an impoverished approach, using dynamic geometry environments to develop new intuitions even in the extremely challenging setting of hyperbolic geometry. Our efforts reverse the typical direction, using formal structures as a source for a new family of intuitions that emerge from exploring a digital model of hyperbolic geometry. This digital model is elaborated within a Euclidean dynamic geometry environment, enabling a conceptual dance that re-configures Euclidean knowledge as a support for building intuitions in hyperbolic space: intuitions based not directly on physical experience but on analogies extending Euclidean concepts.
Euclidean scalar field theory in the bilocal approximation
Nagy, S.; Polonyi, J.; Steib, I.
2018-04-01
The blocking step of the renormalization group method is usually carried out by restricting it to fluctuations and to a local blocked action. The tree-level, bilocal saddle point contribution to the blocking, defined by the infinitesimal decrease of the sharp cutoff in momentum space, is followed within the three-dimensional Euclidean φ⁶ model in this work. The phase structure is changed, new phases and relevant operators are found, and certain universality classes are restricted by the bilocal saddle point.
General Nth order integrals of motion in the Euclidean plane
International Nuclear Information System (INIS)
Post, S; Winternitz, P
2015-01-01
The general form of an integral of motion that is a polynomial of order N in the momenta is presented for a Hamiltonian system in two-dimensional Euclidean space. The classical and the quantum cases are treated separately, emphasizing both the similarities and the differences between the two. The main application will be to study Nth order superintegrable systems that allow separation of variables in the Hamilton–Jacobi and Schrödinger equations, respectively. (paper)
Euclidean supersymmetry, twisting and topological sigma models
International Nuclear Information System (INIS)
Hull, C.M.; Lindstroem, U.; Santos, L. Melo dos; Zabzine, M.; Unge, R. von
2008-01-01
We discuss two dimensional N-extended supersymmetry in Euclidean signature and its R-symmetry. For N = 2, the R-symmetry is SO(2) x SO(1, 1), so that only an A-twist is possible. To formulate a B-twist, or to construct Euclidean N = 2 models with H-flux so that the target geometry is generalised Kahler, it is necessary to work with a complexification of the sigma models. These issues are related to the obstructions to the existence of non-trivial twisted chiral superfields in Euclidean superspace.
The elements of non-Euclidean geometry
Sommerville, D MY
2012-01-01
Renowned for its lucid yet meticulous exposition, this classic allows students to follow the development of non-Euclidean geometry from a fundamental analysis of the concept of parallelism to more advanced topics. 1914 edition. Includes 133 figures.
Scaling limits of Euclidean quantum fields
International Nuclear Information System (INIS)
Enss, V.
1981-01-01
The author studies the long-distance and short-distance behaviour of generalized random processes which arise in Euclidean boson field theories. Among them are Wick polynomials of free fields and P(Φ)₂ models. (Auth.)
Axioms for Euclidean Green's functions. Pt. 2
International Nuclear Information System (INIS)
Osterwalder, K.; Schrader, R.
1975-01-01
We give new (necessary and) sufficient conditions for Euclidean Green's functions to have analytic continuations to a relativistic field theory. These results extend and correct a previous paper. (orig.)
Constructive curves in non-Euclidean planes
Horváth, Ákos G.
2016-01-01
In this paper we overview the theory of conics and roulettes in four non-Euclidean planes. We collect the literature about these classical concepts, from the eighteenth century to the present, including papers available only on arXiv. The comparison of the four non-Euclidean planes, in terms of the known results on conics and roulettes, reflects only the very subjective view of the author.
Classical geometry Euclidean, transformational, inversive, and projective
Leonard, I E; Liu, A C F; Tokarsky, G W
2014-01-01
Features the classical themes of geometry with plentiful applications in mathematics, education, engineering, and science. Accessible and reader-friendly, Classical Geometry: Euclidean, Transformational, Inversive, and Projective introduces readers to a valuable discipline that is crucial to understanding both spatial relationships and logical reasoning. Focusing on the development of geometric intuition while avoiding the axiomatic method, a problem-solving approach is encouraged throughout. The book is strategically divided into three sections: Part One focuses on Euclidean geometry, which p
Chernozhukov, Victor; Hansen, Chris; Spindler, Martin
2016-01-01
The package High-dimensional Metrics (\\Rpackage{hdm}) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...
Euclidean D-branes and higher-dimensional gauge theory
International Nuclear Information System (INIS)
Acharya, B.S.; Figueroa-O'Farrill, J.M.; Spence, B.; O'Loughlin, M.
1997-07-01
We consider Euclidean D-branes wrapping around manifolds of exceptional holonomy in dimensions seven and eight. The resulting theory on the D-brane, that is, the dimensional reduction of 10-dimensional supersymmetric Yang-Mills theory, is a cohomological field theory which describes the topology of the moduli space of instantons. The 7-dimensional theory is an N_T = 2 (or balanced) cohomological theory given by an action potential of Chern-Simons type. As a by-product of this method, we construct a related cohomological field theory which describes the monopole moduli space on a 7-manifold of G₂ holonomy. (author). 22 refs, 3 tabs
Convergent perturbation expansions for Euclidean quantum field theory
International Nuclear Information System (INIS)
Mack, G.; Pordt, A.
1984-09-01
Mayer perturbation theory is designed to provide computable convergent expansions which permit calculation of Green's functions in Euclidean quantum field theory to arbitrary accuracy, including 'nonperturbative' contributions from large field fluctuations. Here we describe the expansions for the example of three-dimensional λφ⁴ theory (in continuous space). They are not essentially more complicated than standard perturbation theory. The n-th order term is expressed in terms of O(n)-dimensional integrals, and is of order λ^k if 4k-3 ≤ n ≤ 4k. (orig.)
The G_Newton --> 0 Limit of Euclidean Quantum Gravity
Smolin, Lee
1992-01-01
Using the Ashtekar formulation, it is shown that the G_{Newton} --> 0 limit of Euclidean or complexified general relativity is not a free field theory, but is a theory that describes a linearized self-dual connection propagating on an arbitrary anti-self-dual background. This theory is quantized in the loop representation and, as in the full theory, an infinite-dimensional space of exact solutions to the constraint is found. An inner product is also proposed. The path integral is constructed...
A strong coupling simulation of Euclidean quantum gravity
International Nuclear Information System (INIS)
Berg, B.; Hamburg Univ.
1984-12-01
Relying on Regge calculus, a systematic numerical investigation of models of 4d Euclidean gravity is proposed. The scale a = l₀ is set by fixing the expectation value of a length. Possible universality of such models is discussed. The strong coupling limit is defined by taking the Planck mass m_p → 0 (in units of l₀⁻¹). The zero order approximation m_p = 0 is called 'fluctuating space' and investigated numerically in two 4d models. Canonical dimensions are realized and both models give a negative expectation value for the scalar curvature density. (orig.)
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando
2017-01-01
aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which makes it faster and more efficient than traditional seizure detection methods.
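The kernel construction described in this abstract can be made concrete. The sketch below is a minimal illustration under the stated definitions, not the authors' code: the matrix logarithm of each SPD covariance descriptor is computed via eigendecomposition, and the Gaussian kernel is evaluated on the Frobenius distance between the logarithms. The function names and the bandwidth value are hypothetical.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive definite matrix,
    computed via eigendecomposition: log S = V diag(log w) V^T."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_gaussian_kernel(A, B, sigma=1.0):
    """k(A, B) = exp(-||log A - log B||_F^2 / (2 sigma^2)) on the SPD manifold."""
    D = spd_log(A) - spd_log(B)
    return float(np.exp(-np.sum(D * D) / (2.0 * sigma ** 2)))

def covariance_descriptor(epoch):
    """SPD covariance matrix of a (channels x samples) EEG epoch;
    a small ridge keeps it strictly positive definite."""
    C = np.cov(epoch)
    return C + 1e-6 * np.eye(C.shape[0])
```

In the paper's pipeline these kernel values would feed a kernel sparse-coding step in the RKHS, with classification by minimal reconstruction residual.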
Visual Analytics for Exploration of a High-Dimensional Structure
2013-04-01
Figure 3. Comparison of Euclidean vs. geodesic distance: LDRs use metrics based on the Euclidean distance between two points, while NLDRs are based on geodesic distance on the manifold, whereas an LDR fails. Figure 4. WEKA GUI for data mining HDD using FRFS-ACO.
Lorentz violations and Euclidean signature metrics
International Nuclear Information System (INIS)
Barbero G, J. Fernando; Villasenor, Eduardo J.S.
2003-01-01
We show that the families of effective actions considered by Jacobson et al. to study Lorentz invariance violations contain a class of models that represent pure general relativity with a Euclidean signature. We also point out that some members of this family of actions preserve Lorentz invariance in a generalized sense
Majorization in Euclidean Geometry and Beyond
Czech Academy of Sciences Publication Activity Database
Fiedler, Miroslav
2015-01-01
Vol. 466 (1 February 2015), pp. 233-240. ISSN 0024-3795. Institutional support: RVO:67985807. Keywords: majorization; doubly stochastic matrix; Euclidean simplex; star; regular simplex; volume of a simplex. Subject RIV: BA - General Mathematics. Impact factor: 0.965, year: 2015
On the Schroedinger representation of the Euclidean quantum field theory
International Nuclear Information System (INIS)
Semmler, U.
1987-04-01
The theme of the present thesis is the Schroedinger representation of the Euclidean quantum field theory: we define the time development of the quantum field states as a functional integral in a novel, mathematically precise way. In the following we discuss the consequences which result from this approach to the Euclidean quantum field theory. Chapter 1 introduces the theory of abstract Wiener spaces, which is shown to be a suitable mathematical tool for the treatment of the physical problems. In chapter 2 the diffusion theory is formulated in the framework of abstract Wiener spaces. In chapter 3 we define the field functional ψ[u, t] as a functional integral, determine the functional differential equation which ψ satisfies (Schroedinger equation), and summarize the consequences resulting from this. Chapter 4 is dedicated to the attempt to determine the kernel of the time-development operator, by the knowledge of which the time development of each initial state is fixed. In chapter 5 the consequences of the theory presented in chapters 3 and 4 are discussed by means of simple examples. In chapter 6 the renormalization which results for the φ⁴ potential from the definition of the functional integral in chapter 3 is calculated up to first-order perturbation theory, and it is shown that the problems in the Symanzik renormalization procedure can be removed. (orig./HSI)
A linear-time algorithm for Euclidean feature transform sets
Hesselink, Wim H.
2007-01-01
The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels with this distance. We present an algorithm to compute the
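The two definitions in this abstract can be illustrated with a deliberately naive brute-force sketch. Note that the paper's contribution is a linear-time algorithm, which this quadratic version does not reproduce; it only demonstrates what the distance and feature transforms compute, and all names are hypothetical.

```python
import math

def feature_transform(image):
    """Distance transform and feature transform of a binary image (brute force).
    image: list of rows; 0 marks background pixels. Returns (dt, ft):
    dt[(i, j)] is the Euclidean distance to the background, and ft[(i, j)] is
    the set of background pixels attaining that distance."""
    bg = [(i, j) for i, row in enumerate(image)
          for j, v in enumerate(row) if v == 0]
    dt, ft = {}, {}
    for i, row in enumerate(image):
        for j, _ in enumerate(row):
            d2 = {p: (i - p[0]) ** 2 + (j - p[1]) ** 2 for p in bg}
            best = min(d2.values())
            dt[(i, j)] = math.sqrt(best)
            ft[(i, j)] = {p for p, v in d2.items() if v == best}
    return dt, ft
```

The feature transform is set-valued precisely because a pixel can be equidistant from several background pixels, which is the case the paper's algorithm handles exactly.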
High dimensional neurocomputing growth, appraisal and applications
Tripathi, Bipin Kumar
2015-01-01
The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as a quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...
Asymptotically Honest Confidence Regions for High Dimensional
DEFF Research Database (Denmark)
Caner, Mehmet; Kock, Anders Bredahl
While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...
Euclidean wormholes with minimally coupled scalar fields
International Nuclear Information System (INIS)
Ruz, Soumendranath; Modak, Bijan; Debnath, Subhra; Sanyal, Abhik Kumar
2013-01-01
A detailed study of quantum and semiclassical Euclidean wormholes for Einstein's theory with a minimally coupled scalar field has been performed for a class of potentials. Massless, constant, massive (quadratic in the scalar field) and inverse (linear) potentials admit the Hawking and Page wormhole boundary condition both in the classically forbidden and allowed regions. An inverse quartic potential has been found to exhibit a semiclassical wormhole configuration. Classical wormholes under a suitable back-reaction leading to a finite radius of the throat, where the strong energy condition is satisfied, have been found for the zero, constant, quadratic and exponential potentials. Treating such classical Euclidean wormholes as an initial condition, a late stage of cosmological evolution has been found to remain unaltered from standard Friedmann cosmology, except for the constant potential which under the back-reaction produces a term like a negative cosmological constant. (paper)
Euclidean fields: vector mesons and photons
International Nuclear Information System (INIS)
Loffelholz, J.
1979-01-01
Free transverse vector fields of mass ≥ 0 are studied. The model is related to the usual free vector meson and electromagnetic quantum field theories by extension of the field operators from transverse to arbitrary test functions. The one-particle states in transverse gauge and their localization are described. Reflection positivity is proved and free Feynman-Kac-Nelson formulas are derived. A Euclidean approach to a photon field in a spherical world using dilatation covariance and inversions is given.
The positive action conjecture and asymptotically euclidean metrics in quantum gravity
International Nuclear Information System (INIS)
Gibbons, G.W.; Pope, C.N.
1979-01-01
The positive action conjecture requires that the action of any asymptotically Euclidean 4-dimensional Riemannian metric be positive, vanishing if and only if the space is flat. Because any Ricci-flat, asymptotically Euclidean metric has zero action and is a local extremum of the action, which is a local minimum at flat space, the conjecture requires that there are no Ricci-flat asymptotically Euclidean metrics other than flat space, which would establish that flat space is the only local minimum. We prove this for metrics on R⁴ and a large class of more complicated topologies and for self-dual metrics. We show that if R^μ_μ ≥ 0 there are no bound states of the Dirac equation and discuss the relevance to possible baryon non-conserving processes mediated by gravitational instantons. We conclude that these are forbidden in the lowest stationary phase approximation. We give a detailed discussion of instantons invariant under an SU(2) or SO(3) isometry group. We find all regular solutions, none of which is asymptotically Euclidean and all of which possess a further Killing vector. In an appendix we construct an approximate self-dual metric on K3, the only simply connected compact manifold which admits a self-dual metric. (orig.)
High Dimensional Classification Using Features Annealed Independence Rules.
Fan, Jianqing; Fan, Yingying
2008-01-01
Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is largely poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
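The feature-selection step described in this abstract (ranking features by the two-sample t-statistic and keeping the top m) can be sketched as follows. This is a minimal illustration, not the authors' implementation, and the helper names are hypothetical.

```python
import math

def two_sample_t(x, y):
    """Welch two-sample t-statistic for a single feature."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def fair_select(X0, X1, m):
    """Rank features by |t| across the two groups and keep the top m,
    the selection step behind Features Annealed Independence Rules."""
    p = len(X0[0])
    t_abs = [abs(two_sample_t([row[j] for row in X0],
                              [row[j] for row in X1])) for j in range(p)]
    return sorted(range(p), key=lambda j: -t_abs[j])[:m]
```

In the paper, the number of retained features m is itself chosen by minimizing an upper bound on the classification error; here it is simply passed in.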
Hubble expansion in a Euclidean framework
International Nuclear Information System (INIS)
Alfven, H.
1979-01-01
There now seems to be strong evidence for a non-cosmological interpretation of the QSO redshift; in any case, so strong that it is of interest to investigate the consequences. The purpose of this paper is to construct a model of the Hubble expansion which is as far as possible from the conventional Big Bang model without coming into conflict with any well-established observational results (while introducing no new laws of physics). This leads to an essentially Euclidean metagalactic model (see Table I) with very little mass outside one-third or half of the Hubble radius. The total kinetic energy of the Hubble expansion need only be about 5% of the rest mass energy. Present observations support backwards-in-time extrapolation of the Hubble expansion to a 'minimum size metagalaxy' R_m, which may have any value up to 10^26 cm. Other arguments speak in favor of a size close to the upper value, say R_m = 10^26 cm (Table II). As this size is probably about 100 times the Schwarzschild limit, an essentially Euclidean description is allowed. The kinetic energy of the Hubble expansion may derive from an intense QSO-like activity in the minimum size metagalaxy, with an energy release corresponding to the annihilation of a few solar masses per galaxy per year. Some of the conclusions based on the Big Bang hypothesis are criticized and in several cases alternative interpretations are suggested. A comparison between the Euclidean and the conventional models is given in Table III. (orig.)
Exploring Concepts of Non-Euclidean Geometry
Directory of Open Access Journals (Sweden)
Luiz Ambrozi
2016-02-01
With this article we intend to propose different teaching and learning situations and show how they can be applied in schools, mediated by the use of concrete materials and GeoGebra software, emphasizing the use of technology in the classroom. This proposal has the role of assisting in the conceptualization and identification of elements of non-Euclidean geometry. In addition, this short course is designed to be an occasion of current and continuing education for teachers, with activities to be developed with dynamic geometry and based on Vergnaud's theory of Conceptual Fields.
Euclidean approach to the inflationary universe
International Nuclear Information System (INIS)
Hawking, S.W.
1983-01-01
The aim of this article is to show how the Euclidean approach can be used to study the inflationary universe. Although this formulation may appear counterintuitive in some respects, it has the advantage that it defines a definite quantum state and provides a framework for calculating quantities of interest such as correlation functions or tunnelling probabilities. By contrast, in the more usual approach in real Lorentzian spacetime, it is not so clear what the quantum state should be or how to evaluate such quantities. (author)
Introduction to non-Euclidean geometry
Wolfe, Harold E
2012-01-01
One of the first college-level texts for elementary courses in non-Euclidean geometry, this concise, readable volume is geared toward students familiar with calculus. A full treatment of the historical background explores the centuries-long efforts to prove Euclid's parallel postulate and their triumphant conclusion. Numerous original exercises form an integral part of the book.Topics include hyperbolic plane geometry and hyperbolic plane trigonometry, applications of calculus to the solutions of some problems in hyperbolic geometry, elliptic plane geometry and trigonometry, and the consistenc
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
SLE as a Mating of Trees in Euclidean Geometry
Holden, Nina; Sun, Xin
2018-05-01
The mating of trees approach to Schramm-Loewner evolution (SLE) in the random geometry of Liouville quantum gravity (LQG) has been recently developed by Duplantier et al. (Liouville quantum gravity as a mating of trees, 2014. arXiv:1409.7055). In this paper we consider the mating of trees approach to SLE in Euclidean geometry. Let η be a whole-plane space-filling SLE with parameter κ > 4, parameterized by Lebesgue measure. The main observable in the mating of trees approach is the contour function, a two-dimensional continuous process describing the evolution of the Minkowski content of the left and right frontier of η. We prove regularity properties of the contour function and show that (as in the LQG case) it encodes all the information about the curve η. We also prove that the uniform spanning tree on Z² converges to SLE₈ in the natural topology associated with the mating of trees approach.
Buckling transition and boundary layer in non-Euclidean plates.
Efrati, Efi; Sharon, Eran; Kupferman, Raz
2009-07-01
Non-Euclidean plates are thin elastic bodies having no stress-free configuration, hence exhibiting residual stresses in the absence of external constraints. These bodies are endowed with a three-dimensional reference metric, which may not necessarily be immersible in physical space. Here, based on a recently developed theory for such bodies, we characterize the transition from flat to buckled equilibrium configurations at a critical value of the plate thickness. Depending on the reference metric, the buckling transition may be either continuous or discontinuous. In the infinitely thin plate limit, under the assumption that a limiting configuration exists, we show that the limit is a configuration that minimizes the bending content, among all configurations with zero stretching content (isometric immersions of the midsurface). For small but finite plate thickness, we show the formation of a boundary layer, whose size scales with the square root of the plate thickness and whose shape is determined by a balance between stretching and bending energies.
Statistical mechanics, gravity, and Euclidean theory
International Nuclear Information System (INIS)
Fursaev, Dmitri V.
2002-01-01
A review of computations of free energy for Gibbs states on stationary but not static gravitational and gauge backgrounds is given. On these backgrounds wave equations for free fields are reduced to eigenvalue problems which depend non-linearly on the spectral parameter. We present a method to deal with such problems. In particular, we demonstrate how some results of the spectral theory of second-order elliptic operators, such as heat kernel asymptotics, can be extended to a class of non-linear spectral problems. The method is used to trace down the relation between the canonical definition of the free energy based on summation over the modes and the covariant definition given in Euclidean quantum gravity. As an application, high-temperature asymptotics of the free energy and of the thermal part of the stress-energy tensor in the presence of rotation are derived. We also discuss statistical mechanics in the presence of Killing horizons where canonical and Euclidean theories are related in a non-trivial way
Introduction to high-dimensional statistics
Giraud, Christophe
2015-01-01
Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
High dimensional classifiers in the imbalanced case
DEFF Research Database (Denmark)
Bak, Britta Anker; Jensen, Jens Ledet
We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...
Topology of high-dimensional manifolds
Energy Technology Data Exchange (ETDEWEB)
Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)
2002-08-15
The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and a one-week conference. This two-part lecture notes volume contains the notes of most of the lecture courses.
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features, is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. The software is available at https://github.com/cskyan/chmannot.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k+1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
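In a Euclidean space the weighted Fréchet mean, the minimizer of the weighted sum of squared distances to a set of vertex points, reduces to the weighted average, which is why its locus over a simplex of weights is the simplex those points span (an editorial illustration of the Euclidean analogue only; in tree space the mean requires iterative algorithms):

```python
import numpy as np

def frechet_mean(points, weights):
    """Weighted Fréchet mean in Euclidean space: the minimizer of
    sum_i w_i * ||x - v_i||^2, which is simply the weighted average."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(points, dtype=float)

# three 'vertex' points in the plane; as the weights range over the 2-simplex,
# the locus of means sweeps out the (2-dimensional) triangle they span
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
m = frechet_mean(V, [1, 1, 1])
print(m)   # the centroid of the triangle
```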
Euclidean supergravity and multi-centered solutions
Directory of Open Access Journals (Sweden)
W.A. Sabra
2017-04-01
In ungauged supergravity theories, the no-force condition for BPS states implies the existence of stable static multi-centered solutions. The first solutions to Einstein–Maxwell theory with a positive cosmological constant describing an arbitrary number of charged black holes were found by Kastor and Traschen. Generalisations to five and higher dimensional theories were obtained by London. Multi-centered solutions in gauged supergravity, even with time-dependence allowed, have yet to be constructed. In this letter we construct supersymmetry-preserving multi-centered solutions for the case of D=5, N=2 Euclidean gauged supergravity coupled to an arbitrary number of vector multiplets. Higher dimensional Einstein–Maxwell multi-centered solutions are also presented.
Euclidean Monte Carlo simulation of nuclear interactions
International Nuclear Information System (INIS)
Montvay, Istvan; Bonn Univ.; Urbach, Carsten
2011-05-01
We present an exploratory study of chiral effective theories of nuclei with methods adopted from lattice quantum chromodynamics (QCD). We show that simulations in the Euclidean path integral approach are feasible and that we can determine the energy of the two-nucleon state. By varying the parameters and the simulated volumes, phase shifts can be determined in principle and hopefully tuned to their physical values in the future. The physical cut-off of the theory is realised by blocking of the lattice fields. By keeping this physical cut-off fixed in physical units, the lattice cut-off can be changed and in this way the lattice artefacts can be eliminated. (orig.)
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)
2015-01-15
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
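The core object here, barycentric coordinates, can be illustrated in the exact small case (an editorial sketch: the paper selects weights over many neighbors by linear programming with explicit approximation error, whereas this shows only the exactly solvable case of d+1 affinely independent neighbors in d dimensions):

```python
import numpy as np

def barycentric_coords(x, V):
    """Weights w with sum_i w_i = 1 and sum_i w_i V[i] = x, obtained by
    solving the linear system that stacks the affine constraint."""
    V = np.asarray(V, dtype=float)          # shape (d+1, d): simplex vertices
    x = np.asarray(x, dtype=float)
    A = np.vstack([V.T, np.ones(len(V))])   # d coordinate rows + one row of ones
    b = np.append(x, 1.0)
    return np.linalg.solve(A, b)

V = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
w = barycentric_coords([0.25, 0.25], V)
print(w)                  # weights of the query point w.r.t. the vertices
print(w @ np.array(V))    # reconstructs the query point
```

Points inside the simplex get nonnegative weights; the LP formulation in the paper generalizes this to more neighbors than dimensions, where the system is underdetermined.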
Euclidean null controllability of perturbed infinite delay systems with ...
African Journals Online (AJOL)
Euclidean null controllability of perturbed infinite delay systems with limited control. ... The results are established by placing conditions on the perturbation function which guarantee that, if the linear control base system is completely Euclidean controllable, then the perturbed system ...
Euclidean null controllability of nonlinear infinite delay systems with ...
African Journals Online (AJOL)
Sufficient conditions for the Euclidean null controllability of non-linear delay systems with time varying multiple delays in the control and implicit derivative are derived. If the uncontrolled system is uniformly asymptotically stable and if the control system is controllable, then the non-linear infinite delay system is Euclidean null ...
Euclidean null controllability of linear systems with delays in state ...
African Journals Online (AJOL)
Sufficient conditions are developed for the Euclidean controllability of linear systems with delay in state and in control. Namely, if the uncontrolled system is uniformly asymptotically stable and the control equation proper, then the control system is Euclidean null controllable. Journal of the Nigerian Association of ...
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
A. Porter and my advisor. The text is primarily written by me. Chapter 5 is a version of [46] where my contribution is all of the analytical ... In Euclidean space, a variational method refers to using calculus of variations techniques to find the minimizer (or maximizer) of a functional (energy) ... geometric interpretation of modularity optimization contrasts with existing interpretations (e.g., probabilistic ones or in terms of the Potts model)
Clustering high dimensional data using RIA
Energy Technology Data Exchange (ETDEWEB)
Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)
2015-05-15
Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measure called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We observe that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
Walwyn, Amy L.; Navarro, Daniel J.
2010-01-01
An experiment is reported comparing human performance on two kinds of visually presented traveling salesperson problems (TSPs), those reliant on Euclidean geometry and those reliant on city block geometry. Across multiple array sizes, human performance was near-optimal in both geometries, but was slightly better in the Euclidean format. Even so,…
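The two geometries compared in the TSP experiment above differ only in the distance function; a greedy nearest-neighbor tour (an editorial stand-in for human or optimal solvers, on made-up points) makes the contrast concrete:

```python
import numpy as np

def dist(a, b, metric):
    diff = np.abs(a - b)
    # straight-line (Euclidean) vs. city-block (L1) distance
    return float(np.sqrt((diff ** 2).sum())) if metric == "euclidean" else float(diff.sum())

def nn_tour(points, metric):
    """Greedy nearest-neighbor heuristic: always visit the closest unvisited city."""
    order, left = [0], set(range(1, len(points)))
    while left:
        nxt = min(left, key=lambda j: dist(points[order[-1]], points[j], metric))
        order.append(nxt)
        left.remove(nxt)
    return order

def tour_length(points, order, metric):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]], metric)
               for i in range(len(order)))

rng = np.random.default_rng(4)
pts = rng.uniform(size=(15, 2))
for m in ("euclidean", "cityblock"):
    order = nn_tour(pts, m)
    print(m, round(tour_length(pts, order, m), 3))
```

Since the L1 distance between two points never falls below the L2 distance, any fixed tour is at least as long under city-block geometry as under Euclidean geometry.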
Statistical 2D and 3D shape analysis using Non-Euclidean Metrics
DEFF Research Database (Denmark)
Larsen, Rasmus; Hilger, Klaus Baggesen; Wrobel, Mark Christoph
2002-01-01
We address the problem of extracting meaningful, uncorrelated biological modes of variation from tangent space shape coordinates in 2D and 3D using non-Euclidean metrics. We adapt the maximum autocorrelation factor analysis and the minimum noise fraction transform to shape decomposition. Furthermore, we study metrics based on repeated annotations of a training set. We define a way of assessing the correlation between landmarks rather than between landmark coordinates. Finally, we apply the proposed methods to a 2D data set consisting of outlines of lungs and a 3D/(4D) data set consisting of sets...
The Euclidean three-point function in loop and perturbative gravity
International Nuclear Information System (INIS)
Rovelli, Carlo; Zhang Mingyi
2011-01-01
We compute the leading order of the three-point function in loop quantum gravity, using the vertex expansion of the Euclidean version of the new spin foam dynamics, in the region of γ < 1. We find results consistent with Regge calculus in the limit γ → 0, j → ∞. We also compute the tree-level three-point function of perturbative quantum general relativity in position space and discuss the possibility of directly comparing the two results.
Non-Euclidean Geometry, Nontrivial Topology and Quantum Vacuum Effects
Directory of Open Access Journals (Sweden)
Yurii A. Sitenko
2018-01-01
Space outside a topological defect of the Abrikosov–Nielsen–Olesen (ANO) vortex type is locally flat but non-Euclidean. If a spinor field is quantized in such a space, then a variety of quantum effects are induced in the vacuum. On the basis of the continuum model for long-wavelength electronic excitations, originating in the tight-binding approximation for the nearest-neighbor interaction of atoms in the crystal lattice, we consider quantum ground-state effects in Dirac materials with two-dimensional monolayer structures warped into nanocones by a disclination; the nonzero size of the disclination is taken into account, and a boundary condition at the edge of the disclination is chosen to ensure self-adjointness of the Dirac–Weyl Hamiltonian operator. We show that the quantum ground-state effects are independent of the disclination size, and we find circumstances in which they are independent of the parameters of the boundary condition.
Evaluating Clustering in Subspace Projections of High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Günnemann, Stephan; Assent, Ira
2009-01-01
Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...
Quaternion analyticity and conformally Kaehlerian structure in Euclidean gravity
International Nuclear Information System (INIS)
Guersey, F.; Chia-Hsiung Tze
1984-01-01
Starting from the fact that the d = 4 Euclidean flat spacetime is conformally related to the Kaehler manifold H^2 x S^2, we show the Euclidean Schwarzschild metric to be conformally related to another Kaehler manifold M^2 x S^2, with M^2 being conformal to H^2 in two dimensions. Both metrics, which are conformally Kaehlerian, are form-invariant under the infinite-parameter Fueter group, the Euclidean counterpart of Milne's group of clock regraduation. The associated Einstein equations translate into Fueter's quaternionic analyticity. The latter leads to an infinite number of local continuity equations. (orig.)
International Nuclear Information System (INIS)
Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang
2013-01-01
Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise in quantum information applications defined in high-dimensional Hilbert space. (letter)
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate-that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan
2017-12-12
Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
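The hybrid LASSLE idea, regularization for support selection followed by least squares on the selected support, can be sketched for a toy VAR(1) (an editorial illustration, not the authors' code; the ISTA solver, lag order 1, penalty level, and simulated network are all assumptions, and the PDC connectivity step is not shown):

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Proximal-gradient (ISTA) LASSO, used here only as the screening stage."""
    L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = beta - X.T @ (X @ beta - y) / L        # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(2)
p, T = 6, 400
A = np.zeros((p, p))
A[0, 0], A[1, 0], A[2, 2] = 0.5, 0.4, -0.3         # sparse VAR(1) transition matrix
Y = np.zeros((T, p))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + 0.5 * rng.normal(size=p)

X, Z = Y[:-1], Y[1:]
A_hat = np.zeros((p, p))
for i in range(p):                                  # one regression per channel
    b = ista_lasso(X, Z[:, i], lam=20.0)            # stage 1: sparsify (LASSO)
    S = np.flatnonzero(np.abs(b) > 1e-8)
    if S.size:                                      # stage 2: OLS on the support (LSE)
        A_hat[i, S] = np.linalg.lstsq(X[:, S], Z[:, i], rcond=None)[0]
print(np.round(A_hat, 2))
```

Refitting by least squares on the selected support removes most of the shrinkage bias that the LASSO stage introduces, which is the motivation given for the hybrid estimator.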
A qualitative numerical study of high dimensional dynamical systems
Albers, David James
Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increase linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss.
Moreover, results regarding the high-dimensional
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-Euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-Euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-Euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-Euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-Euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
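The key trick in the log-Euclidean metric, mapping SPD covariance matrices into a vector space via the matrix logarithm so that ordinary averaging and subspace learning apply, can be shown in a few lines (an editorial sketch of the metric itself, not the tracking algorithm):

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigh."""
    w, U = np.linalg.eigh(S)
    return (U * np.exp(w)) @ U.T

def log_euclidean_mean(mats):
    """Mean of SPD matrices under the log-Euclidean metric:
    average in the matrix-log domain, then map back with exp."""
    return sym_expm(sum(sym_logm(S) for S in mats) / len(mats))

A = np.diag([1.0, 4.0])
B = np.diag([4.0, 1.0])
M = log_euclidean_mean([A, B])
print(M)   # diag(2, 2): the geometric mean 2 = sqrt(1*4) appears on the diagonal
```

Unlike the arithmetic mean diag(2.5, 2.5), the log-Euclidean mean respects the multiplicative structure of covariances, which is one reason it is favored for covariance-descriptor appearance models.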
New solutions of euclidean SU(2) gauge theory
International Nuclear Information System (INIS)
Khan, I.
1983-08-01
New solutions of the Euclidean SU(2) gauge theory having finite field strength everywhere are presented. The solutions are self-dual or anti-self-dual and constitute a two-parameter family which includes the instantons. (author)
Comparison of Euclidean Distance and Canberra Distance in Face Recognition
Directory of Open Access Journals (Sweden)
Sendhy Rachmat Wurdianarto
2014-08-01
The development of computer science has been very rapid. One sign of this is that computer science has entered the world of biometrics. Biometrics refers to human characteristics that can be used to distinguish one person from another. One use of a characteristic or organ that every human possesses for identification (recognition) is the face. Against this background, this work explores a Matlab application for face recognition using the Euclidean Distance and Canberra Distance methods. The application development model used is the waterfall model. The waterfall model comprises a sequence of process activities presented as requirements analysis, design using UML (Unified Modeling Language), and processing of input images using Euclidean Distance and Canberra Distance. The conclusion that can be drawn is that a face recognition application using the Euclidean Distance and Canberra Distance methods has advantages and disadvantages for each method. In the future, the application can be developed using video or other objects as input. Keywords: Euclidean Distance, Face Recognition, Biometrics, Canberra Distance
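The two distance measures compared in this record are easy to state side by side (an editorial sketch following the usual conventions, with terms whose denominator is zero contributing zero to the Canberra sum):

```python
import numpy as np

def euclidean(a, b):
    """Straight-line L2 distance."""
    return float(np.sqrt(((a - b) ** 2).sum()))

def canberra(a, b):
    """Sum of term-wise |a-b| / (|a|+|b|); a zero denominator contributes 0."""
    num = np.abs(a - b)
    den = np.abs(a) + np.abs(b)
    return float(np.sum(np.divide(num, den, out=np.zeros_like(num), where=den > 0)))

a = np.array([1.0, 2.0, 0.0])
b = np.array([2.0, 2.0, 0.0])
print(euclidean(a, b))   # 1.0
print(canberra(a, b))    # 1/3: only the first component differs, weighted by 1/(1+2)
```

Because each Canberra term is normalized by the magnitudes involved, the measure is sensitive to relative differences near zero, which is precisely why the two metrics can rank face-feature matches differently.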
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
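Variable kernel density estimation assigns each data point its own bandwidth so the density adapts to local sample sparsity; a minimal 1D sample-point estimator (an editorial sketch, with the k-nearest-neighbour bandwidth rule as an assumption, not the method of the record above):

```python
import numpy as np

def variable_kde(x_eval, data, k=5):
    """Sample-point Gaussian KDE in 1D: each datum gets a bandwidth equal to
    its distance to the k-th nearest neighbour, so dense regions use narrow
    kernels and sparse regions use wide ones."""
    data = np.asarray(data, dtype=float)
    d = np.abs(data[:, None] - data[None, :])
    h = np.sort(d, axis=1)[:, k]                    # per-point bandwidths
    x = np.asarray(x_eval, dtype=float)[:, None]
    kern = np.exp(-0.5 * ((x - data) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kern.mean(axis=1)

rng = np.random.default_rng(3)
# bimodal data: a broad mode at 0 and a tight mode at 6
data = np.concatenate([rng.normal(0, 1, 400), rng.normal(6, 0.2, 100)])
xs = np.array([0.0, 3.0, 6.0])
dens = variable_kde(xs, data)
print(dens)   # high near both modes, low in the gap between them
```

A single fixed bandwidth would have to compromise between the broad and the tight mode; the per-point bandwidths resolve both at once.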
Singular Minkowski and Euclidean solutions for SU(2) Yang-Mills theory
International Nuclear Information System (INIS)
Singleton, D.
1996-01-01
This paper examines a solution to the SU(2) Yang-Mills-Higgs system which is a trivial mathematical extension of recently discovered Schwarzschild-like solutions (Singleton D., Phys. Rev. D, 51 (1995) 5911). Physically, however, this new solution has drastically different properties from the Schwarzschild-like solutions. A new classical solution for Euclidean SU(2) Yang-Mills theory is also studied. Again, this new solution is a mathematically trivial extension of the Belavin-Polyakov-Schwartz-Tyupkin (BPST) instanton (Belavin A. A. et al., Phys. Lett. B, 59 (1975) 85), but is physically very different. Unlike the usual instanton solution, the present solution is singular on a sphere of arbitrary radius in Euclidean space. Both of these solutions are infinite-energy solutions, so their practical value is somewhat unclear. However, they may be useful in exploring some of the mathematical aspects of classical Yang-Mills theory.
The non-Euclidean revolution with an introduction by H.S.M. Coxeter
Trudeau, Richard J
2001-01-01
How unique and definitive is Euclidean geometry in describing the "real" space in which we live? Richard Trudeau confronts the fundamental question of truth and its representation through mathematical models in The Non-Euclidean Revolution. First, the author analyzes geometry in its historical and philosophical setting; second, he examines a revolution every bit as significant as the Copernican revolution in astronomy and the Darwinian revolution in biology; third, on the most speculative level, he questions the possibility of absolute knowledge of the world. Trudeau writes in a lively, entertaining, and highly accessible style. His book provides one of the most stimulating and personal presentations of a struggle with the nature of truth in mathematics and the physical world. A portion of the book won the Pólya Prize, a distinguished award from the Mathematical Association of America.
Modelling non-Euclidean movement and landscape connectivity in highly structured ecological networks
Sutherland, Christopher; Fuller, Angela K.; Royle, J. Andrew
2015-01-01
Movement is influenced by landscape structure, configuration and geometry, but measuring distance as perceived by animals poses technical and logistical challenges. Instead, movement is typically measured using Euclidean distance, irrespective of location or landscape structure, or is based on arbitrary cost surfaces. A recently proposed extension of spatial capture-recapture (SCR) models resolves this issue using spatial encounter histories of individuals to calculate least-cost paths (ecological distance: Ecology, 94, 2013, 287) thereby relaxing the Euclidean assumption. We evaluate the consequences of not accounting for movement heterogeneity when estimating abundance in highly structured landscapes, and demonstrate the value of this approach for estimating biologically realistic space-use patterns and landscape connectivity.
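The contrast between Euclidean distance and least-cost "ecological distance" can be made concrete with Dijkstra's algorithm over a resistance surface (an editorial sketch with a made-up grid and cost convention, not the SCR model of the paper):

```python
import heapq
import math

def least_cost(grid, src, dst):
    """Dijkstra over a resistance grid: moving onto a cell costs its
    resistance (scaled by sqrt(2) for diagonal steps)."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist[(r, c)]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + math.hypot(dr, dc) * grid[nr][nc]
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

# uniform resistance 1 except a high-cost barrier column with one gap (row 4)
grid = [[1.0] * 5 for _ in range(5)]
for r in range(4):
    grid[r][2] = 50.0
ecological = least_cost(grid, (0, 0), (0, 4))
euclidean = math.hypot(0 - 0, 4 - 0)
print(ecological, euclidean)   # the detour through the gap exceeds the straight line
```

Two points that are close "as the crow flies" can be far apart ecologically when a barrier forces a detour, which is exactly the movement heterogeneity the abstract says Euclidean distance ignores.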
Euclidean supersymmetric solutions with the self-dual Weyl tensor
Directory of Open Access Journals (Sweden)
Masato Nozawa
2017-07-01
We explore the Euclidean supersymmetric solutions admitting the self-dual gauge field in the framework of N=2 minimal gauged supergravity in four dimensions. According to the classification scheme utilizing the spinorial geometry or the bilinears of Killing spinors, the general solution preserves one quarter of supersymmetry and is described by the Przanowski–Tod class with the self-dual Weyl tensor. We demonstrate that there exists an additional Killing spinor, provided the Przanowski–Tod metric admits a Killing vector that commutes with the principal one. The proof proceeds by recasting the metric into another Przanowski–Tod form. This formalism enables us to show that the self-dual Reissner–Nordström–Taub–NUT–AdS metric possesses a second Killing spinor, which has been missed over many years. We also address the supersymmetry when the Przanowski–Tod space is conformal to each of the self-dual ambi-toric Kähler metrics. It turns out that three classes of solutions are all reduced to the self-dual Carter family, by virtue of the nondegenerate Killing–Yano tensor.
High-Dimensional Quantum Information Processing with Linear Optics
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study of a generalization of the standard optical multiport for applications in quantum computation and simulation is reported. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for
Efficient gamma index calculation using fast Euclidean distance transform
Energy Technology Data Exchange (ETDEWEB)
Chen Mingli; Lu Weiguo; Chen Quan; Ruchala, Kenneth; Olivera, Gustavo [TomoTherapy Inc., 1240 Deming Way, Madison, WI 53717 (United States)], E-mail: wlu@tomotherapy.com
2009-04-07
The gamma index is a tool for dose distribution comparison. It combines both dose difference (DD) and distance to agreement (DTA) into a single quantity. Though it is an effective measure, making up for the inadequacy of DD or DTA alone, its calculation can be very time-consuming. For a k-D space with N quantization levels in each dimension, the complexity of the exhaustive search is O(N{sup 2k}). In this work, we propose an efficient method that reduces the complexity from O(N{sup 2k}) to O(N{sup k}M), where M is the number of discretized dose values and is comparable to N. More precisely, by embedding the reference dose distribution in a (k+1)-D spatial-dose space, we can use a fast Euclidean distance transform with linear complexity to obtain a table of gamma indices evaluated over a range of the (k+1)-D spatial-dose space. Then, to obtain gamma indices for the test dose distribution, it requires only a table lookup with complexity O(N{sup k}). Such a table can also be used for other test dose distributions as long as the reference dose distribution is the same. Simulations demonstrated the efficiency of our proposed method. The speedup for 3D gamma index calculation is expected to be on the order of tens of thousands (from O(N{sup 6}) to O(N{sup 3}M)) if N is a few hundred, which makes clinical usage of the 3D gamma index feasible. A byproduct of the gamma index table is that the gradient of the gamma index with respect to either the spatial or dose dimension can be easily derived. The gradient can be used to identify the main causes of the discrepancy from the reference distribution at any dose point in the test distribution or incorporated in treatment planning and machine parameter optimization.
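The table-lookup idea is concrete enough to sketch. Below is a minimal 1-D illustration (not the authors' code; the bin count and the 3 mm / 3% criteria are illustrative assumptions): the reference dose curve is embedded in a 2-D (position, dose) grid, and a single anisotropic Euclidean distance transform, with each axis scaled by its criterion, yields a gamma value for every cell. Evaluating a test distribution is then one table lookup per point.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gamma_table_1d(ref_dose, dx, dta=3.0, dd_frac=0.03, n_bins=64):
    """Embed a 1-D reference dose curve in a 2-D (position, dose) grid and
    take one anisotropic Euclidean distance transform; the value in each
    cell is then the gamma index for that (position, dose) pair."""
    d_max = ref_dose.max()
    bins = np.clip(np.rint(ref_dose / d_max * (n_bins - 1)).astype(int), 0, n_bins - 1)
    grid = np.ones((ref_dose.size, n_bins), dtype=bool)
    grid[np.arange(ref_dose.size), bins] = False   # the reference curve
    # Scale each axis by its criterion: the spatial step by the DTA,
    # the dose step by the DD fraction of the maximum dose.
    dose_step = d_max / (n_bins - 1)
    sampling = (dx / dta, dose_step / (dd_frac * d_max))
    return distance_transform_edt(grid, sampling=sampling)

def gamma_of_test(table, ref_dose, test_dose):
    """Evaluating a test curve is only a table lookup per point."""
    n_bins = table.shape[1]
    bins = np.rint(test_dose / ref_dose.max() * (n_bins - 1)).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    return table[np.arange(test_dose.size), bins]
```

On an identical test curve every lookup lands on the reference curve itself, so gamma is exactly zero; a small dose perturbation stays below the pass threshold of 1.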
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
International Nuclear Information System (INIS)
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao
2017-01-01
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
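A minimal sketch of the two ingredients, under simplifying assumptions (a dense eigendecomposition, a fixed kernel bandwidth, no Nyström extension, and prediction at training points only): diffusion maps supply low-dimensional coordinates, and a bare-bones Gaussian process regression evaluates its kernel on those coordinates rather than in the ambient space.

```python
import numpy as np

def diffusion_map(X, eps, n_comp=2):
    """Diffusion-map embedding: Gaussian affinities, row-normalised to a
    Markov matrix, then the leading non-trivial eigenvectors (scaled by
    their eigenvalues) give coordinates in which Euclidean distance
    approximates the diffusion distance."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)      # row-stochastic Markov matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)
    w, V = w.real[order], V.real[:, order]
    return V[:, 1:n_comp + 1] * w[1:n_comp + 1]  # drop the constant mode

def gp_predict(Z_train, y, Z_test, length=1.0, noise=1e-6):
    """Bare-bones GP regression whose squared-exponential kernel is
    evaluated on the diffusion coordinates, not the ambient ones."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    alpha = np.linalg.solve(k(Z_train, Z_train) + noise * np.eye(len(Z_train)), y)
    return k(Z_test, Z_train) @ alpha
```

For data lying on a curve in a higher-dimensional space, a quantity that varies smoothly along the curve is recovered almost exactly at the training points.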
Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
2010-01-01
Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Energy Technology Data Exchange (ETDEWEB)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
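The decomposition of a high-dimensional input into unions of low-dimensional inputs can be illustrated with the first-order cut-HDMR variant of ANOVA, a deliberately simplified stand-in for the ANOVA-collocation machinery above (the anchor point and the tabulation grids are illustrative assumptions):

```python
import numpy as np

def cut_hdmr_first_order(f, anchor, grids):
    """First-order cut-HDMR/ANOVA surrogate: a d-dimensional function is
    replaced by a constant term plus d one-dimensional component
    functions, each tabulated along its own axis with the remaining
    coordinates frozen at the anchor point."""
    f0 = f(anchor)
    comps = []
    for i, g in enumerate(grids):
        x = np.tile(anchor, (len(g), 1))
        x[:, i] = g                       # vary only coordinate i
        comps.append(np.array([f(row) for row in x]) - f0)
    def surrogate(x):
        return f0 + sum(np.interp(x[i], grids[i], comps[i])
                        for i in range(len(grids)))
    return surrogate
```

For a function that is additive in its inputs the first-order surrogate is exact up to interpolation error; for general functions higher-order terms are needed, which is exactly where the cost grows and model reduction pays off.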
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2009-01-01
Nearest neighbor (NN) search in high-dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
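For context, the hashing idea underlying both rigorous-LSH and the LSB-tree can be sketched with the classic p-stable LSH scheme for Euclidean space. This is a generic single-table illustration, not the LSB-tree itself; the parameters `k` and `w` are illustrative assumptions.

```python
import numpy as np

class E2LSH:
    """One hash table of p-stable (Gaussian projection) LSH for Euclidean
    space: h(x) = floor((a.x + b) / w), concatenated over k projections.
    Nearby points collide with high probability; a query only compares
    against the candidates in its own bucket."""
    def __init__(self, dim, k=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(k, dim))      # k Gaussian projections
        self.b = rng.uniform(0, w, size=k)      # random offsets
        self.w = w
        self.table = {}

    def _key(self, x):
        return tuple(np.floor((self.A @ x + self.b) / self.w).astype(int))

    def insert(self, i, x):
        self.table.setdefault(self._key(x), []).append((i, x))

    def query(self, q):
        """Return the index of the nearest candidate in q's bucket,
        or None if the bucket is empty."""
        cand = self.table.get(self._key(q), [])
        if not cand:
            return None
        return min(cand, key=lambda ix: np.linalg.norm(ix[1] - q))[0]
```

Rigorous-LSH controls quality by maintaining many such tables at multiple radii, which is where its space and query costs come from; the LSB-tree reorganizes this into a B-tree-friendly layout.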
Some notes on tetrahedrally closed spherical sets in Euclidean spaces
Indian Academy of Sciences (India)
is a relation between these sets. P is called the point set, L the line set and I the incidence relation. A point-line geometry S = (P,L,I) is called a near polygon if every two distinct points are incident with at most one line and if for every point x and every line L, there exists a unique point on L that is nearest to x with respect to ...
The quark Schwinger-Dyson equation in temporal Euclidean space
Czech Academy of Sciences Publication Activity Database
Šauli, Vladimír; Batiz, Z.
2009-01-01
Roč. 36, č. 3 (2009), 035002/1-035002/13 ISSN 0954-3899 Institutional research plan: CEZ:AV0Z10480505 Keywords : ANALYTIC PERTURBATION-THEORY * DYNAMICAL SYMMETRY-BREAKING * BACKGROUND FIELD METHOD Subject RIV: BE - Theoretical Physics Impact factor: 2.124, year: 2009
Mannheim Partner D-Curves in the Euclidean 3-space
Directory of Open Access Journals (Sweden)
Mustafa Kazaz
2015-02-01
Full Text Available In this paper, we consider the idea of Mannheim partner curves for curves lying on surfaces. By considering the Darboux frames of surface curves, we define Mannheim partner D-curves and give the characterizations for these curves. We also find the relations between geodesic curvatures, normal curvatures and geodesic torsions of these associated curves. Furthermore, we show that definition and characterizations of Mannheim partner D-curves include those of Mannheim partner curves in some special cases.
Model-based Clustering of High-Dimensional Data in Astrophysics
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase in measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools, renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and make it possible to classify high-dimensional data efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space, where the distances between any pair of samples become similar and every sample may look like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs- (K-NNG-) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and thereby represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
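A compact sketch of the two-stage design, with one loudly labeled substitution: a linear PCA projection stands in for the deep autoencoder so the example stays self-contained, while the ensemble of subset-based k-NN detectors follows the description above.

```python
import numpy as np

def pca_compress(X, n_comp):
    """Stand-in for the DAE: a linear PCA projection to a compact
    subspace (the paper uses a trained deep autoencoder here)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def knn_ensemble_scores(Z, k=5, n_subsets=10, frac=0.5, seed=0):
    """Each detector holds a random subset of the compressed data and
    scores every point by its distance to the k-th nearest subset
    member; the final score averages the detectors."""
    rng = np.random.default_rng(seed)
    n = len(Z)
    scores = np.zeros(n)
    for _ in range(n_subsets):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        sub = Z[idx]
        d = np.linalg.norm(Z[:, None, :] - sub[None, :, :], axis=-1)
        scores += np.sort(d, axis=1)[:, k]   # k-th NN (column 0 may be self)
    return scores / n_subsets
```

On a cluster of inliers plus one distant point, the distant point receives the largest score after compression.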
Weyl, Hermann
1922-01-01
Excellent introduction probes deeply into Euclidean space, Riemann's space, Einstein's general relativity, gravitational waves and energy, and laws of conservation. "A classic of physics." - British Journal for Philosophy and Science.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
Metrics for measuring distances in configuration spaces
International Nuclear Information System (INIS)
Sadeghi, Ali; Ghasemi, S. Alireza; Schaefer, Bastian; Mohr, Stephan; Goedecker, Stefan; Lill, Markus A.
2013-01-01
In order to characterize molecular structures, we introduce configurational fingerprint vectors which are counterparts of quantities used experimentally to identify structures. The Euclidean distance between the configurational fingerprint vectors satisfies the properties of a metric and can therefore safely be used to measure dissimilarities between configurations in the high-dimensional configuration space. In particular, we show that these metrics are a perfect and computationally cheap replacement for the root-mean-square distance (RMSD) when one has to decide whether two noise-contaminated configurations are identical or not. We introduce a Monte Carlo approach to obtain the global minimum of the RMSD between configurations, which is obtained from a global minimization over all translations, rotations, and permutations of atomic indices.
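The key property, that a permutation- and rotation-invariant fingerprint turns plain Euclidean distance into a cheap dissimilarity measure, can be illustrated with a toy fingerprint: the sorted eigenvalues of the interatomic distance matrix (the paper builds its vectors from other matrices, e.g. overlap matrices; this choice only mimics the invariances).

```python
import numpy as np

def fingerprint(pos):
    """Toy configurational fingerprint: sorted eigenvalues of the
    symmetric interatomic distance matrix. Invariant under translations,
    rotations, reflections, and permutations of atomic indices."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return np.sort(np.linalg.eigvalsh(d))

def config_distance(pos_a, pos_b):
    """Euclidean distance between fingerprints: zero for configurations
    that differ only by the symmetries above, positive otherwise."""
    return np.linalg.norm(fingerprint(pos_a) - fingerprint(pos_b))
```

Unlike the RMSD, no alignment or index matching is needed: the global minimization over translations, rotations, and permutations is built into the invariance of the fingerprint.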
What if? Exploring the multiverse through Euclidean wormholes
Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador
2017-10-01
We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.
Multivariate statistics high-dimensional and large-sample approximations
Fujikoshi, Yasunori; Shimizu, Ryoichi
2010-01-01
A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
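The simplest rank-structured format, a low-rank matrix obtained by truncated SVD, already shows the storage gain that hierarchical tensor formats generalize to trees of dimension groups (the bivariate test function below is an illustrative example, not one from the talk):

```python
import numpy as np

def truncated_svd(A, r):
    """Best rank-r approximation of a matrix: the two-variable special
    case of a rank-structured (hierarchical tensor) representation.
    Storage drops from m*n values to (m + n)*r."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

Smooth functions such as exp(-xy) sampled on a grid have rapidly decaying singular values, so a small rank already reproduces the full table to high accuracy; hierarchical formats exploit the same decay across many variables at once.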
Supersymmetry on a euclidean spacetime lattice 1. A target theory with four supercharges
International Nuclear Information System (INIS)
Cohen, Andrew G.; Kaplan, David B.; Katz, Emanuel; Uensal, Mithat
2003-01-01
We formulate a euclidean spacetime lattice whose continuum limit is (2,2) supersymmetric Yang-Mills theory in two dimensions, a theory which possesses four supercharges and an anomalous global chiral symmetry. The lattice action respects one exact supersymmetry, which allows the target theory to emerge in the continuum limit without fine-tuning. Our method exploits an orbifold construction described previously for spatial lattices in Minkowski space, and can be generalized to more complicated theories with additional supersymmetry and more spacetime dimensions. (author)
Constant curvature black holes in Einstein AdS gravity: Euclidean action and thermodynamics
Guilleminot, Pablo; Olea, Rodrigo; Petrov, Alexander N.
2018-03-01
We compute the Euclidean action for constant curvature black holes (CCBHs), as an attempt to associate thermodynamic quantities to these solutions of Einstein anti-de Sitter (AdS) gravity. CCBHs are gravitational configurations obtained by identifications along isometries of a D-dimensional globally AdS space, such that the Riemann tensor remains constant. Here, these solutions are interpreted as extended objects, which contain a (D-2)-dimensional de Sitter brane as a subspace. Nevertheless, the computation of the free energy for these solutions shows that they do not obey standard thermodynamic relations.
Ultraviolet stability in euclidean scalar field theories
Energy Technology Data Exchange (ETDEWEB)
Benfatto, G; Cassandro, M; Gallavotti, G; Nicolo, F; Olivieri, E; Presutti, E; Scacciatelli, E [Rome Univ. (Italy). Istituto di Matematica; Rome Univ. (Italy). Istituto di Fisica]
1980-01-01
We develop a technique for reducing the problem of the ultraviolet divergences and their removal to a free field problem. This work is an example of a problem to which a rather general method can be applied. It can be thought of as an attempt towards a rigorous version (in 2 or 3 space-time dimensions) of the analysis of the structure of the functional integrals, the underlying mechanism being essentially the same as in Glimm's approach.
Euclidean self-dual Yang-Mills field configurations
International Nuclear Information System (INIS)
Sartori, G.
1980-01-01
The determination of a large class of regular and singular Euclidean self-dual Yang-Mills field configurations is reduced to the solution of a set of linear algebraic equations. The matrix of the coefficients is a polynomial function of x and the rules for its construction are elementary. (author)
The toroidal Hausdorff dimension of 2d Euclidean quantum gravity
DEFF Research Database (Denmark)
Ambjorn, Jan; Budd, Timothy George
2013-01-01
The lengths of shortest non-contractible loops are studied numerically in 2d Euclidean quantum gravity on a torus coupled to conformal field theories with central charge less than one. We find that the distribution of these geodesic lengths displays a scaling in agreement with a Hausdorff dimension...
Timed Fast Exact Euclidean Distance (tFEED) maps
Kehtarnavaz, Nasser; Schouten, Theo E.; Laplante, Philip A.; Kuppens, Harco; van den Broek, Egon
2005-01-01
In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. In a naive implementation, each object pixel feeds its (exact) ED to each background pixel; then the minimum of these values denotes the ED to
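The naive implementation described above is easy to state precisely, and it doubles as a correctness reference for fast transforms of the FEED family (the comparison against scipy's distance transform in the test is ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def naive_ed_map(obj):
    """Naive exact Euclidean distance map: each background pixel takes
    the minimum distance over all object pixels. O(n_bg * n_obj) work,
    which is the cost FEED-style algorithms reduce."""
    ys, xs = np.nonzero(obj)
    pts = np.stack([ys, xs], axis=1).astype(float)
    out = np.zeros(obj.shape)
    for (i, j), _ in np.ndenumerate(out):
        if not obj[i, j]:
            out[i, j] = np.sqrt(((pts - (i, j)) ** 2).sum(1)).min()
    return out
```

Because both the naive map and the exact fast transform compute the same quantity, they must agree to machine precision on any binary image.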
The Euclidean distance degree of an algebraic variety
Draisma, J.; Horobet, E.; Ottaviani, G.; Sturmfels, B.; Thomas, R.R.
2013-01-01
The nearest point map of a real algebraic variety with respect to Euclidean distance is an algebraic function. For instance, for varieties of low rank matrices, the Eckart-Young Theorem states that this map is given by the singular value decomposition. This article develops a theory of such nearest
Superconvergent perturbation theory for euclidean scalar field theories
International Nuclear Information System (INIS)
Ushveridze, A.G.
1984-01-01
It is shown that the bare (unrenormalized) correlation functions in the euclidean scalar field theories can be expanded in a series whose terms, being computable in a relatively simple way, are free from ultraviolet and infrared divergences. This series is convergent (divergent) for finite (infinite) values of the correlation functions. (orig.)
Improvement in quality testing of Braille printer output with Euclidean ...
African Journals Online (AJOL)
This paper focuses on quality testing of Braille-printed paper using a calibrated camera, by detecting dots and measuring the Euclidean distances between them, which should be equal vertically and horizontally. For higher accuracy, camera calibration is essential to observe a planar checkerboard pattern from different distances and ...
The Role of Structure in Learning Non-Euclidean Geometry
Asmuth, Jennifer A.
2009-01-01
How do people learn novel mathematical information that contradicts prior knowledge? The focus of this thesis is the role of structure in the acquisition of knowledge about hyperbolic geometry, a non-Euclidean geometry. In a series of three experiments, I contrast a more holistic structure--training based on closed figures--with a mathematically…
Euclidean Primes Have the Minimum Number of Primitive Roots
Czech Academy of Sciences Publication Activity Database
Křížek, Michal; Somer, L.
2008-01-01
Roč. 12, č. 1 (2008), s. 121-127 ISSN 0972-5555 R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : Euclidean primes * Fermat primes * Sophie Germain primes Subject RIV: BA - General Mathematics
Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids
International Nuclear Information System (INIS)
Jakeman, John D.; Archibald, Richard; Xiu Dongbin
2011-01-01
In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented and various numerical examples are utilized to demonstrate the efficacy of the method.
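In one dimension the polynomial-annihilation idea reduces to a familiar stencil: a second-order difference annihilates linear pieces, so it is O(h^2) where the function is smooth and O(jump size) across a discontinuity. A minimal sketch (the uniform grid and the median-based threshold are simplifying assumptions; the paper works on adaptive sparse grids in many dimensions):

```python
import numpy as np

def detect_jumps(f, tau=10.0):
    """Flag cells where the second-order difference, which annihilates
    linear pieces, sits far above its median level on the grid."""
    s = np.abs(f[:-2] - 2.0 * f[1:-1] + f[2:])
    return np.where(s > tau * np.median(s) + 1e-15)[0] + 1
```

On a smooth function with one jump, only the two stencils straddling the jump are flagged; the adaptive sparse-grid version refines exactly around such flags.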
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than either alone in terms of mean squared error, when a bias-based analysis is used.
An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data
DEFF Research Database (Denmark)
Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira
2011-01-01
…than a global property. Different from existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces…
Harnessing high-dimensional hyperentanglement through a biphoton frequency comb
Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee
2015-08-01
Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
Analysing spatially extended high-dimensional dynamics by recurrence plots
Energy Technology Data Exchange (ETDEWEB)
Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)
2015-05-08
Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
Blasjo, Viktor|info:eu-repo/dai/nl/338038108
2013-01-01
We discuss how a creature accustomed to Euclidean space would fare in a world of hyperbolic or spherical geometry, and conversely. Various optical illusions and counterintuitive experiences arise, which can be explicated mathematically using plane models of these geometries.
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step, and the quality of features in discriminating different classes plays an important role. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we propose a Similarity-Dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to determine with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the Similarity-Dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real-life examples from biomedical data are also analysed. The proposed plot is independent of the number of dimensions of the feature space.
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
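The sparse estimators listed above all revolve around l1 regularization. As a hedged illustration (not the implementation discussed in the talk, and with a made-up toy dataset), the soft-thresholding update at the heart of proximal-gradient LASSO solvers can be sketched as:

```python
# Minimal sketch of LASSO estimation via proximal gradient descent (ISTA).
# Data, step size, and penalty are illustrative assumptions only.

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the l1 norm."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_ista(X, y, lam, step=0.01, iters=2000):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by iterated soft thresholding."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        # residual r = X b - y
        r = [sum(X[i][j] * b[j] for j in range(p)) - y[i] for i in range(n)]
        # gradient of the smooth part: X^T r
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        b = [soft_threshold(b[j] - step * g[j], step * lam) for j in range(p)]
    return b

# Toy example with more candidate coefficients than informative ones:
# y depends only on the first feature (true coefficient 2).
X = [[1.0, 0.2, 0.1, 0.0],
     [2.0, 0.1, 0.0, 0.3],
     [3.0, 0.0, 0.2, 0.1]]
y = [2.0, 4.0, 6.0]
beta = lasso_ista(X, y, lam=0.5)
```

The l1 penalty sets the three uninformative coefficients exactly to zero while only slightly shrinking the informative one, which is the sparsity property the talk's estimators exploit.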
Optimal recovery of linear operators in non-Euclidean metrics
Energy Technology Data Exchange (ETDEWEB)
Osipenko, K Yu [Moscow State Aviation Technological University, Moscow (Russian Federation)
2014-10-31
The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.
Change of Measure between Light Travel Time and Euclidean Distances
Directory of Open Access Journals (Sweden)
Heymann Y.
2013-04-01
The problem of cosmological distances is approached using a method based on the propagation of light in an expanding Universe. From the change of measure between Light Travel Time and Euclidean Distances, a formula is derived to compute distances as a function of redshift. This formula is identical to Mattig's formula (with q_0 = 1/2), which is based on Friedmann's equations of general relativity.
Jégat , Alain
2014-01-01
The usual framework for Einstein's special theory of relativity is the pseudo-Euclidean spacetime proposed by Hermann Minkowski. This article aims at proposing a different model. The framework is a Euclidean four-dimensional space in which all the objects move regularly (meaning that, between two observations, whatever their trajectories, they cover the same distance), but where the events are seen in projection according to a privileged direction, as we are going to explain. The remark, rat...
Reinforcement learning on slow features of high-dimensional input streams.
Directory of Open Access Journals (Sweden)
Robert Legenstein
Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
Constraint algebra in Smolin's G →0 limit of 4D Euclidean gravity
Varadarajan, Madhavan
2018-05-01
Smolin's generally covariant G_Newton → 0 limit of 4D Euclidean gravity is a useful toy model for the study of the constraint algebra in loop quantum gravity (LQG). In particular, the commutator between its Hamiltonian constraints has a metric-dependent structure function. While a prior LQG-like construction of nontrivial anomaly-free constraint commutators for the model exists, that work suffers from two defects. First, Smolin's remarks on the inability of the quantum dynamics to generate propagation effects apply. Second, the construction only yields the action of a single Hamiltonian constraint together with the action of its commutator through a continuum limit of corresponding discrete approximants; the continuum limit of a product of two or more constraints does not exist. Here, we incorporate changes in the quantum dynamics through structural modifications in the choice of discrete approximants to the quantum Hamiltonian constraint. The new structure is motivated by that responsible for propagation in an LQG-like quantization of parametrized field theory and significantly alters the space of physical states. We study the off-shell constraint algebra of the model in the context of these structural changes and show that the continuum limit action of multiple products of Hamiltonian constraints is (a) supported on an appropriate domain of states, (b) yields anomaly-free commutators between pairs of Hamiltonian constraints, and (c) is diffeomorphism covariant. Many of our considerations seem robust enough to be applied to the setting of 4D Euclidean gravity.
Directory of Open Access Journals (Sweden)
Atamurat Kuchkarov
2016-01-01
We consider pursuit and evasion differential games of a group of m pursuers and one evader on manifolds with Euclidean metric. The motions of all players are simple, and the maximal speeds of all players are equal. If the state of a pursuer coincides with that of the evader at some time, we say that pursuit is completed. We establish that each of the differential games (pursuit or evasion) is equivalent to a differential game of m groups of countably many pursuers and one group of countably many evaders in Euclidean space. All the players in any of these groups are controlled by one control parameter. We find a condition under which pursuit can be completed; if this condition is not satisfied, then evasion is possible. We construct strategies for the pursuers in the pursuit game which ensure completion of the game in finite time, and give a formula for this time. In the case of the evasion game, we construct a strategy for the evader.
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
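The failure of traditional distance metrics that this abstract refers to is often called distance concentration. A minimal, self-contained sketch (with arbitrary uniform random data, unrelated to the paper's experiments) shows the relative contrast between the nearest and furthest neighbours of a query collapsing as dimensionality grows:

```python
# Demonstrate distance concentration: in high dimensions, the gap between
# a query's nearest and furthest neighbours shrinks relative to the
# nearest distance. Data distribution and sizes are illustrative choices.
import math
import random

random.seed(0)

def relative_contrast(dim, n_points=200):
    """(d_max - d_min) / d_min over Euclidean distances to a random query."""
    query = [random.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        p = [random.random() for _ in range(dim)]
        dists.append(math.dist(query, p))
    return (max(dists) - min(dists)) / min(dists)

low = relative_contrast(2)      # 2 dimensions: large contrast
high = relative_contrast(1000)  # 1000 dimensions: distances concentrate
```

With uniform data, the contrast in 2 dimensions is typically an order of magnitude or more, while in 1000 dimensions all distances cluster tightly around their mean, which is exactly why localized distance functions and dimension-aware indexing become attractive.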
A hybridized K-means clustering approach for high dimensional ...
African Journals Online (AJOL)
International Journal of Engineering, Science and Technology ... Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis is a popularly used data analysis method in a number of areas.
On Robust Information Extraction from High-Dimensional Data
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2014-01-01
Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science
Inference in High-dimensional Dynamic Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Tang, Haihan
We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...
Pricing High-Dimensional American Options Using Local Consistency Conditions
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables.An approximating Markov chain is built using this sampling and
Irregular grid methods for pricing high-dimensional American options
Berridge, S.J.
2004-01-01
This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of
International Nuclear Information System (INIS)
Guerrieri, A.
2009-01-01
In this report the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the space resolution and the value of some parameters employed by the model. Chaotic and non-chaotic regimes of circulation have been found. [it
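For readers unfamiliar with the diagnostic used in this report: the largest Lyapunov exponent measures the average exponential rate at which nearby trajectories separate, with a positive value signalling chaos. A minimal illustration on the one-dimensional logistic map (a standard toy system, not the circulation model of the report):

```python
# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x) by
# averaging log|f'(x)| = log|r*(1-2x)| along an orbit. The map, seed,
# and parameter values are illustrative choices, not from the report.
import math

def logistic_lyapunov(r, x0=0.4, n=20000, discard=1000):
    x = x0
    for _ in range(discard):           # let transients die out
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam_chaotic = logistic_lyapunov(4.0)   # fully chaotic regime
lam_periodic = logistic_lyapunov(3.2)  # stable 2-cycle
```

A positive exponent (near ln 2 for r = 4) marks the chaotic regime, while a negative exponent marks the periodic one, mirroring the chaotic and non-chaotic circulation regimes found in the report.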
Multi-stability in folded shells: non-Euclidean origami
Evans, Arthur
2015-03-01
Both natural and man-made structures benefit from having multiple mechanically stable states, from the quick snapping motion of hummingbird beaks to micro-textured surfaces with tunable roughness. Rather than discuss special fabrication techniques for creating bi-stability through material anisotropy, in this talk I will present several examples of how folding a structure can modify the energy landscape and thus lead to multiple stable states. Using ideas from origami and differential geometry, I will discuss how deforming a non-Euclidean surface can be done either continuously or discontinuously, and explore the effects that global constraints have on the ultimate stability of the surface.
ILUCG algorithm which minimizes in the Euclidean norm
International Nuclear Information System (INIS)
Petravic, M.; Kuo-Petravic, G.
1978-07-01
An algorithm is presented which solves sparse systems of linear equations of the form Ax = Y, where A is non-symmetric, by the Incomplete LU Decomposition-Conjugate Gradient (ILUCG) method. The algorithm minimizes the error in the Euclidean norm ||x_i - x||_2, where x_i is the solution vector after the i-th iteration and x the exact solution vector. The results of a test on one real problem indicate that the algorithm is likely to be competitive with the best existing algorithms of its type.
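As a hedged sketch of the conjugate-gradient core of such methods (plain CG on a small symmetric positive-definite system; the ILUCG method of the report additionally applies an incomplete LU preconditioner and handles non-symmetric A, neither of which is shown here):

```python
# Plain conjugate gradient for A x = b with A symmetric positive definite.
# The toy 2x2 system below is a made-up example; CG converges on it in
# at most two iterations.

def cg(A, b, iters=50, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x for the initial guess x = 0
    p = r[:]                 # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

Each iteration minimizes the error over an expanding Krylov subspace; the ILUCG variant instead targets the Euclidean error norm directly and copes with non-symmetric systems.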
High Dimensional Modulation and MIMO Techniques for Access Networks
DEFF Research Database (Denmark)
Binti Othman, Maisara
Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks is of interest as a promising solution for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless… the capacity per wavelength of the femto-cell network. Bit rates up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance are demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially… optical access network. 2×2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting laser (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase…
HSM: Heterogeneous Subspace Mining in High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Seidl, Thomas
2009-01-01
Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines…
Analysis of chaos in high-dimensional wind power system.
Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping
2018-01-01
A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including a single-parameter or periodic disturbance, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculation and analysis of all state variables' Lyapunov exponents and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
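The thresholding idea can be sketched in a few lines. Note this is universal (constant-threshold) thresholding of the off-diagonal entries, a simplification of the adaptive, entry-dependent thresholds of Cai and Liu used in the paper; the example matrix is made up:

```python
# Sketch of covariance thresholding: off-diagonal entries smaller in
# magnitude than a threshold t are set to zero, enforcing sparsity
# while the diagonal (the variances) is always kept.

def threshold_cov(S, t):
    """Universal hard thresholding of the off-diagonal entries of S."""
    p = len(S)
    return [[S[i][j] if (i == j or abs(S[i][j]) >= t) else 0.0
             for j in range(p)]
            for i in range(p)]

# Toy covariance estimate: the entries 0.05 and -0.02 are treated as
# estimation noise and zeroed out at threshold 0.1.
S = [[2.00, 0.80, 0.05],
     [0.80, 1.00, -0.02],
     [0.05, -0.02, 1.50]]
T = threshold_cov(S, 0.1)
```

The adaptive variant replaces the single t with entry-specific thresholds scaled by estimated variances of the covariance estimates, which matters when those variances are heterogeneous.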
High-dimensional data in economics and their (robust) analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf
High-dimensional Data in Economics and their (Robust) Analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability
Quantifying high dimensional entanglement with two mutually unbiased bases
Directory of Open Access Journals (Sweden)
Paul Erker
2017-07-01
We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
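The low-order expansion idea can be illustrated with first-order cut-HDMR, where a function is approximated by its value at a reference (cut) point plus one-variable corrections along each coordinate. This toy, with a made-up separable test function, sketches only the general principle, not the fuzzy finite element workflow of the paper:

```python
# First-order cut-HDMR: f(x) ~ f(c) + sum_i [f(c with x_i swapped in) - f(c)].
# Exact whenever f is additively separable; an approximation otherwise.

def cut_hdmr_first_order(f, c):
    """Return a first-order cut-HDMR approximation of f around cut point c."""
    f0 = f(c)
    def approx(x):
        total = f0
        for i in range(len(c)):
            xi = list(c)
            xi[i] = x[i]              # vary one coordinate at a time
            total += f(xi) - f0
        return total
    return approx

# Separable test function (hypothetical example): the first-order
# expansion reproduces it exactly.
f = lambda x: x[0] ** 2 + 3.0 * x[1] - x[2]
g = cut_hdmr_first_order(f, [0.0, 0.0, 0.0])
value = g([1.0, 2.0, 3.0])            # equals f([1, 2, 3]) = 4
```

Building the expansion requires only one-dimensional sweeps per variable, which is the source of the polynomial (rather than exponential) scaling noted in the abstract.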
High-dimensional quantum cloning and applications to quantum hacking.
Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim
2017-02-01
Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.
Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges
Pereira, A. D.; Sobreiro, R. F.; Sorella, S. P.
2016-10-01
In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi: 10.1103/PhysRevD.92.045039 arXiv:1506.06995 [hep-th], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions.
Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges
International Nuclear Information System (INIS)
Pereira, A.D.; Sobreiro, R.F.; Sorella, S.P.
2016-01-01
In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi:10.1103/PhysRevD.92.045039. arXiv:1506.06995 [hepth], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions. (orig.)
Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges
Energy Technology Data Exchange (ETDEWEB)
Pereira, A.D. [UFF, Universidade Federal Fluminense, Instituto de Fisica, Campus da Praia Vermelha, Niteroi, RJ (Brazil); Max Planck Institute for Gravitational Physics, Albert Einstein Institute, Potsdam (Germany); UERJ, Universidade do Estado do Rio de Janeiro, Departamento de Fisica Teorica, Rio de Janeiro (Brazil); Sobreiro, R.F. [UFF, Universidade Federal Fluminense, Instituto de Fisica, Campus da Praia Vermelha, Niteroi, RJ (Brazil); Sorella, S.P. [UERJ, Universidade do Estado do Rio de Janeiro, Departamento de Fisica Teorica, Rio de Janeiro (Brazil)
2016-10-15
In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi:10.1103/PhysRevD.92.045039. arXiv:1506.06995 [hepth], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions. (orig.)
Euclidean scalar Green's functions near the black hole and black brane horizons
International Nuclear Information System (INIS)
Haba, Z
2009-01-01
We discuss approximations of the Riemannian geometry near the horizon. If a (D + 1)-dimensional manifold N has a bifurcate Killing horizon then we approximate N by a product of the two-dimensional Rindler space R^2 and a (D - 1)-dimensional Riemannian manifold M. We obtain approximate formulae for scalar Green's functions. We study the behavior of the Green's functions near the horizon and their dimensional reduction. We show that if M is compact then the Green's function near the horizon can be approximated by the Green's function of the two-dimensional quantum field theory. The correction term is exponentially small away from the horizon. We extend the results to black brane solutions of supergravity in 10 and 11 dimensions. The near-horizon geometry can be approximated by N = AdS_p × S^q. We discuss the Euclidean Green's functions on N and their behavior near the horizon.
Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes
Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong
2018-04-01
In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high dimensional black holes.
DEFF Research Database (Denmark)
Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld
2017-01-01
is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
Texture classification using non-Euclidean Minkowski dilation
Florindo, Joao B.; Bruno, Odemir M.
2018-03-01
This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors on texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. In this way, we generalize the dilation for different values of p in the Lp metric (Euclidean is the particular case p = 2) and obtain the descriptors from the cumulative distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons, for example) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of the rate of images correctly classified. These results suggest the potential of the descriptors in this type of task, with a wide range of possible applications to real-world problems.
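As a rough illustration of the pipeline the abstract describes, the sketch below computes a brute-force Lp distance map of a point set and the cumulative dilation-volume descriptor derived from it. This is a minimal numpy sketch under our own naming (`minkowski_distance_map` and `dilation_descriptors` are not from the paper), not the authors' implementation:

```python
import numpy as np

def minkowski_distance_map(grid_pts, obj_pts, p):
    """Distance of every grid point to the nearest object point under the Lp metric."""
    diff = np.abs(grid_pts[:, None, :] - obj_pts[None, :, :])
    dist = (diff ** p).sum(axis=2) ** (1.0 / p)
    return dist.min(axis=1)

def dilation_descriptors(dist_map, radii):
    """Cumulative distribution of the distance transform: volume of the dilated set."""
    return np.array([(dist_map <= r).sum() for r in radii])

# Toy example: a single object point dilated on a 5x5 grid.
grid = np.array([[x, y] for x in range(-2, 3) for y in range(-2, 3)], dtype=float)
obj = np.array([[0.0, 0.0]])
d_l2 = minkowski_distance_map(grid, obj, p=2)  # Euclidean (Bouligand-Minkowski) case
d_l1 = minkowski_distance_map(grid, obj, p=1)  # city-block case
```

Different values of p yield different dilation volumes at the same radius, which is precisely what makes the generalized descriptors discriminative.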
Speckle Suppression by Weighted Euclidean Distance Anisotropic Diffusion
Directory of Open Access Journals (Sweden)
Fengcheng Guo
2018-05-01
Full Text Available To better reduce image speckle noise while also maintaining edge information in synthetic aperture radar (SAR) images, we propose a novel weighted Euclidean distance anisotropic diffusion (WEDAD) algorithm. Presented here is a modified speckle reducing anisotropic diffusion (SRAD) method, which constructs a new edge detection operator using weighted Euclidean distances. The new edge detection operator can adaptively distinguish between homogeneous and heterogeneous image regions, effectively generate anisotropic diffusion coefficients for each image pixel, and filter each pixel at different scales. Additionally, the effects of two different weighting methods (Gaussian weighting and non-linear weighting) on de-noising were analyzed, and the effect of different adjustment coefficient settings on speckle suppression was also explored. A series of experiments were conducted using an image with added noise, a GF-3 SAR image, and a YG-29 SAR image. The experimental results demonstrate that the proposed method can not only significantly suppress speckle, thus improving the visual effects, but also better preserve the edge information of images.
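The WEDAD operator itself is more elaborate, but the core SRAD-style loop can be caricatured in a few lines: an explicit diffusion iteration whose coefficient shrinks where the Euclidean norm of the local gray-level differences is large. All names and parameter values below are our own illustrative choices, not the paper's:

```python
import numpy as np

def edge_stopping_diffusion(img, n_iter=10, dt=0.1, k=2.0):
    """Explicit anisotropic diffusion; boundaries handled periodically for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # gray-level differences to the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # edge measure: Euclidean norm of the local difference vector
        g = np.sqrt(dn**2 + ds**2 + de**2 + dw**2)
        c = 1.0 / (1.0 + (g / k) ** 2)  # small coefficient at edges -> less smoothing
        u += dt * c * (dn + ds + de + dw)  # damped discrete Laplacian step
    return u
```

Flat regions diffuse freely (c near 1) while sharp transitions are preserved (c near 0), which is the behaviour the adaptive WEDAD coefficients refine.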
Isometric immersions and embeddings of locally Euclidean metrics in R2
International Nuclear Information System (INIS)
Sabitov, I Kh
1999-01-01
This paper deals with the problem of isometric immersions and embeddings of two-dimensional locally Euclidean metrics in the Euclidean plane. We find explicit formulae for the immersions of metrics defined on a simply connected domain and a number of sufficient conditions for the existence of isometric embeddings. In the case when the domain is multiply connected, we find necessary conditions for the existence of isometric immersions and classify the cases when the metric admits no isometric immersion in the Euclidean plane.
The manifold model for space-time
International Nuclear Information System (INIS)
Heller, M.
1981-01-01
Physical processes happen on a space-time arena. It turns out that all contemporary macroscopic physical theories presuppose a common mathematical model for this arena, the so-called manifold model of space-time. The first part of the study is a heuristic introduction to the concept of a smooth manifold, starting with the intuitively clearer concepts of a curve and a surface in Euclidean space. In the second part the definitions of the C^∞ manifold and of certain structures which arise in a natural way from the manifold concept are given. The role of the enveloping Euclidean space (i.e. of the Euclidean space appearing in the manifold definition) in these definitions is stressed. The Euclidean character of the enveloping space induces local Euclidean (topological and differential) properties on the manifold. A suggestion is made that replacing the enveloping Euclidean space by a discrete non-Euclidean space would be a correct way towards the quantization of space-time. (author)
Hawking radiation of a high-dimensional rotating black hole
Energy Technology Data Exchange (ETDEWEB)
Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)
2010-01-15
We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high-dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time-varying matrix-valued integrands. We observe n equidistant high-frequency data points of the underlying Brownian diffusion and we assume that N/n → c ∈ (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
High-dimensional quantum channel estimation using classical light
CSIR Research Space (South Africa)
Mabena, Chemist M
2017-11-01
PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa, and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa.
Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle
International Nuclear Information System (INIS)
Sardanyes, Josep
2009-01-01
Ghost-induced delayed transitions are analyzed in high-dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that an increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is studied by means of numerical analysis, focusing on their extinction dynamics associated with the ghosts. Such networks allow us to explore the properties of ghosts living in high-dimensional phase spaces with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems
Directory of Open Access Journals (Sweden)
DimitrisG. Stavrakoudis
2012-04-01
Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims at reducing the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results on a hyperspectral remote sensing classification task as well as on 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.
Anisotropic, Mixed-Norm Lizorkin-Triebel Spaces and Diffeomorphic Maps
DEFF Research Database (Denmark)
Johnsen, Jon; Hansen, Sabrina Munch; Sickel, Winfried
2014-01-01
This paper gives general results on invariance of anisotropic Lizorkin-Triebel spaces with mixed norms under coordinate transformations on Euclidean space, open sets, and cylindrical domains.
Reduction of product platform complexity by vectorial Euclidean algorithm
International Nuclear Information System (INIS)
Navarrete, Israel Aguilera; Guzman, Alejandro A. Lozano
2013-01-01
In the traditional design of machines, equipment, and devices, technical solutions are treated as practically independent, which increases design cost and complexity. Overcoming this situation has traditionally been tackled using only the designer's experience. In this work, a reduction of product platform complexity is presented, based on a matrix representation of technical solutions versus product properties; this matrix represents the product platform. From this matrix, the Euclidean distances among technical solutions are obtained, and the vectorial distances among technical solutions are collected in a new matrix whose order equals the number of technical solutions identified. This new matrix can be reorganized into groups with a hierarchical structure, in such a way that modular design of products becomes more tractable. As a result of this procedure, the minimum vector distances are found, making it possible to identify the best technical solutions for the design problem raised. Application of these concepts is shown with two examples.
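A minimal sketch of this workflow, assuming nothing beyond the abstract: rows of the platform matrix are technical solutions, columns are product properties, and pairwise Euclidean distances are grouped by single linkage under a distance threshold. The function name and the threshold criterion are our own illustrative choices:

```python
import numpy as np

def solution_groups(platform, threshold):
    """Group technical solutions (rows) whose Euclidean distance is <= threshold."""
    D = np.sqrt(((platform[:, None, :] - platform[None, :, :]) ** 2).sum(axis=-1))
    n = len(platform)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if D[i, j] <= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Solutions landing in the same group are candidates for a shared module; the smallest off-diagonal entries of D point to the closest pairs of technical solutions.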
Curvature-driven morphing of non-Euclidean shells
Pezzulla, Matteo; Stoop, Norbert; Jiang, Xin; Holmes, D. P.
2017-05-01
We investigate how thin structures change their shape in response to non-mechanical stimuli that can be interpreted as variations in the structure's natural curvature. Starting from the theory of non-Euclidean plates and shells, we derive an effective model that reduces a three-dimensional stimulus to the natural fundamental forms of the mid-surface of the structure, incorporating expansion, or growth, in the thickness. Then, we apply the model to a variety of thin bodies, from flat plates to spherical shells, obtaining excellent agreement between theory and numerics. We show how cylinders and cones can either bend more or unroll, and eventually snap and rotate. We also study the nearly isometric deformations of a spherical shell and describe how this shape change is ruled by the geometry of a spindle. As the derived results stem from a purely geometrical model, they are general and scalable.
Defects and boundary layers in non-Euclidean plates
International Nuclear Information System (INIS)
Gemmer, J A; Venkataramani, S C
2012-01-01
We investigate the behaviour of non-Euclidean plates with constant negative Gaussian curvature using the Föppl-von Kármán reduced theory of elasticity. Motivated by recent experimental results, we focus on annuli with a periodic profile. We prove rigorous upper and lower bounds for the elastic energy that scale like the thickness squared. In particular we show that there are only two types of global minimizers: deformations that remain flat, and saddle-shaped deformations with isolated regions of stretching near the edge of the annulus. We also show that there exist local minimizers with a periodic profile that have additional boundary layers near their lines of inflection. These additional boundary layers are a new phenomenon in thin elastic sheets and are necessary to regularize jump discontinuities in the azimuthal curvature across lines of inflection. We rigorously derive scaling laws for the width of these boundary layers as a function of the thickness of the sheet. (paper)
Geometry through history Euclidean, hyperbolic, and projective geometries
Dillon, Meighan I
2018-01-01
Presented as an engaging discourse, this textbook invites readers to delve into the historical origins and uses of geometry. The narrative traces the influence of Euclid’s system of geometry, as developed in his classic text The Elements, through the Arabic period, the modern era in the West, and up to twentieth century mathematics. Axioms and proof methods used by mathematicians from those periods are explored alongside the problems in Euclidean geometry that lead to their work. Students cultivate skills applicable to much of modern mathematics through sections that integrate concepts like projective and hyperbolic geometry with representative proof-based exercises. For its sophisticated account of ancient to modern geometries, this text assumes only a year of college mathematics as it builds towards its conclusion with algebraic curves and quaternions. Euclid’s work has affected geometry for thousands of years, so this text has something to offer to anyone who wants to broaden their appreciation for the...
Euclidean mirrors: enhanced vacuum decay from reflected instantons
Akal, Ibrahim; Moortgat-Pick, Gudrid
2018-05-01
We study the tunnelling of virtual matter–antimatter pairs from the quantum vacuum in the presence of a spatially uniform, time-dependent electric background composed of a strong, slow field superimposed with a weak, rapid field. After analytic continuation to Euclidean spacetime, we obtain from the instanton equations two critical points. While one of them is the closing point of the instanton path, the other serves as an Euclidean mirror which reflects and squeezes the instanton. It is this reflection and shrinking which is responsible for an enormous enhancement of the vacuum pair production rate. We discuss how important features of two different mechanisms can be analysed and understood via such a rotation in the complex plane. (a) Consistent with previous studies, we first discuss the standard assisted mechanism with a static strong field and certain weak fields with a distinct pole structure in order to show that the reflection takes place exactly at the poles. We also discuss the effect of possible sub-cycle structures. We extend this reflection picture then to weak fields which have no poles present and illustrate the effective reflections with explicit examples. An additional field strength dependence for the rate occurs in such cases. We analytically compute the characteristic threshold for the assisted mechanism given by the critical combined Keldysh parameter. We discuss significant differences between these two types of fields. For various backgrounds, we present the contributing instantons and perform analytical computations for the corresponding rates treating both fields nonperturbatively. (b) In addition, we also study the case with a nonstatic strong field which gives rise to the assisted dynamical mechanism. For different strong field profiles we investigate the impact on the critical combined Keldysh parameter. As an explicit example, we analytically compute the rate by employing the exact reflection points. The validity of the predictions
Euclidean mirrors. Enhanced vacuum decay from reflected instantons
Energy Technology Data Exchange (ETDEWEB)
Akal, Ibrahim [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Moortgat-Pick, Gudrid [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik
2017-06-15
We study the tunneling of virtual matter-antimatter pairs from the quantum vacuum in the presence of a spatially uniform temporal electric background composed of a strong, slow field superimposed with a weak, rapid field. After analytic continuation to Euclidean spacetime we obtain from the instanton equations two critical points. While one of them is the closing point of the instanton path, the other serves as an Euclidean mirror which reflects and squeezes the instanton. It is this reflection and shrinking which is responsible for an enormous enhancement of the vacuum pair production rate. We discuss how important features of this mechanism can be analysed and understood via such a rotation in the complex plane. Consistent with previous studies, we consider certain examples where we apply weak fields with a distinct pole structure in order to show that the reflection takes place exactly at the poles. We also discuss the effect of possible sub-cycle structures. We extend this reflection picture to fields which have no poles present and illustrate the effective reflections with explicit examples. An additional field strength dependence for the rate occurs in such cases. We analytically compute the characteristic threshold for this mechanism given by the critical combined Keldysh parameter. We discuss significant differences between these two types of fields. For various backgrounds, we present the contributing instantons and perform analytical computations for the corresponding rates treating both fields nonperturbatively. The validity of the results is confirmed by numerical computations. Considering different profiles for the strong field, we also discuss its impact on the critical combined Keldysh parameter.
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
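FLANN itself ships with OpenCV; as a point of reference for what the approximate algorithms accelerate, here is the exact brute-force matcher that they are benchmarked against, written as a numpy sketch with our own naming and impractical at the scales the paper targets:

```python
import numpy as np

def brute_force_knn(queries, data, k=1):
    """Exact k-nearest-neighbour indices under squared Euclidean distance."""
    # all pairwise squared distances: (n_queries, n_data)
    d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
    return np.argsort(d2, axis=1)[:, :k]
```

Randomized k-d forests and priority-search k-means trees trade a small loss in accuracy against this baseline for orders-of-magnitude speedups on high-dimensional data.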
High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems
International Nuclear Information System (INIS)
Wachowiak, M P; Sarlo, B B; Foster, A E Lambe
2014-01-01
Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, with hardware then assigned based on the particular task.
High-dimensional single-cell cancer biology.
Irish, Jonathan M; Doxie, Deon B
2014-01-01
Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.
Three Dimensional Fast Exact Euclidean Distance (3D-FEED) Maps
Latecki, L.J.; Schouten, Theo E.; Mount, D.M.; Kuppens, Harco C.; Wu, A.Y.; van den Broek, Egon
2006-01-01
In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. Recently, the Fast Exact Euclidean Distance (FEED) transformation was launched. In this paper, we present the three dimensional (3D) version of
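FEED computes exact ED maps efficiently by letting each object pixel "feed" its distance to the pixels it influences; the reference result any such algorithm must reproduce is the naive brute-force map sketched below (our own minimal numpy version, quadratic in the number of voxels and only suitable for tiny volumes):

```python
import numpy as np

def brute_force_ed_map(obj_mask):
    """Exact Euclidean distance of every voxel to the nearest object voxel."""
    obj = np.argwhere(obj_mask)  # coordinates of object voxels
    # coordinates of every voxel in the volume
    grid = np.indices(obj_mask.shape).reshape(obj_mask.ndim, -1).T
    d = np.sqrt(((grid[:, None, :] - obj[None, :, :]) ** 2).sum(axis=-1))
    return d.min(axis=1).reshape(obj_mask.shape)
```

The same code handles 2D and 3D masks, which is why the step from FEED to 3D-FEED is conceptually natural even though the efficient implementation differs.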
Uniqueness of Gibbs states and global Markov property for Euclidean fields
International Nuclear Information System (INIS)
Albeverio, S.; Høegh-Krohn, R.
1981-01-01
The authors briefly discuss the proof of the uniqueness of solutions of the DLR equations (uniqueness of Gibbs states) in the class of regular generalized random fields (in the sense of having second moments bounded by those of some Euclidean field), for the Euclidean fields with trigonometric interaction. (Auth.)
Non-Euclidean simplex optimization. [Application to potentiometric titration of Pu
Energy Technology Data Exchange (ETDEWEB)
Silver, G.L.
1977-08-15
Geometric optimization techniques useful for studying chemical equilibria traditionally rely upon principles of Euclidean geometry, but such algorithms may also be based upon principles of a non-Euclidean geometry. The sequential simplex method is adapted to the hyperbolic plane, and application of optimization to problems such as the potentiometric titration of plutonium is suggested.
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
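The resemblance function at the heart of the test is simple to state; a minimal sketch (names are ours) for species-abundance vectors of sampling units:

```python
import numpy as np

def squared_euclidean_distance(su_a, su_b):
    """SED between the species-abundance vectors of two sampling units."""
    diff = np.asarray(su_a, dtype=float) - np.asarray(su_b, dtype=float)
    return float((diff ** 2).sum())

def sed_matrix(sus):
    """All pairwise SEDs among sampling units (rows of `sus`)."""
    s = np.asarray(sus, dtype=float)
    return ((s[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)
```

Comparing the distribution of SEDs between sampling times against the within-time distances is what turns the resemblance function into a test of community change.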
He, Ling Yan; Wang, Tie-Jun; Wang, Chuan
2016-07-11
High-dimensional quantum systems provide a higher quantum channel capacity, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates are required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate where a high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic qudit, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and the experimental realization proceeds on a shorter temporal scale. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to the 2n-dimensional case and is feasible with current technology.
International Nuclear Information System (INIS)
Loran, Farhang
2004-01-01
We solve the Klein-Gordon equation for massless scalars on (d+1)-dimensional Minkowski (Euclidean) space in terms of the Cauchy data on the hypersurface t=0. By inserting the solution into the action of massless scalars in Minkowski (Euclidean) space, we obtain the action of the dual theory on the boundary t=0, which is exactly the holographic dual of conformally coupled scalars on (d+1)-dimensional (Euclidean anti-) de Sitter space obtained in the (A)dS/CFT correspondence. The observed equivalence of dual theories is explained using the one-to-one map between conformally coupled scalar fields on Minkowski (Euclidean) space and (Euclidean anti-) de Sitter space, which is an isomorphism between the hypersurface t=0 of Minkowski (Euclidean) space and the boundary of (A)dS space.
The Dirac equation in the Lobachevsky space-time
International Nuclear Information System (INIS)
Paramonov, D.V.; Paramonova, N.N.; Shavokhina, N.S.
2000-01-01
The product of the Lobachevsky space and the time axis is termed the Lobachevsky space-time. The Lobachevsky space is considered as a sheet of a hyperboloid in the four-dimensional pseudo-Euclidean space. The Dirac-Fock-Ivanenko equation is reduced to the Dirac equation in two special forms by passing from the Lamé basis in the Lobachevsky space to the Cartesian basis in the enveloping pseudo-Euclidean space.
Class prediction for high-dimensional class-imbalanced data
Directory of Open Access Journals (Sweden)
Lusa Lara
2010-10-01
Full Text Available Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
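Down-sizing, one of the strategies found to work well, simply balances the training set by sub-sampling the majority class. A minimal sketch (our own naming, not the paper's code):

```python
import numpy as np

def downsize(X, y, seed=None):
    """Balance classes by randomly down-sampling every class to the minority size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]
```

The classifier is then trained on the balanced subset; asymmetric bagging repeats this sub-sampling many times and aggregates the resulting classifiers.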
High-dimensional change-point estimation: Combining filtering with convex optimization
Soh, Yong Sheng; Chandrasekaran, Venkat
2017-01-01
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
A general Euclidean connection for so(n,m) lie algebra and the algebraic approach to scattering
International Nuclear Information System (INIS)
Ionescu, R.A.
1994-11-01
We obtain a general Euclidean connection for so(n,m). This Euclidean connection allows an algebraic derivation of the S matrix and it reduces to the known one in suitable circumstances. (author). 8 refs
Applying recursive numerical integration techniques for solving high dimensional integrals
International Nuclear Information System (INIS)
Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan
2016-11-01
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
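The recursive idea described above, folding a high-dimensional integral into repeated applications of a single low-dimensional quadrature rule, can be illustrated on a toy nearest-neighbour chain. This is a sketch under invented assumptions (the Gaussian weight `W` and a 5-point Gauss-Legendre rule), not the authors' lattice code.

```python
import math

# 5-point Gauss-Legendre rule on [-1, 1] (standard tabulated values).
NODES = [-0.906179845938664, -0.5384693101056831, 0.0,
         0.5384693101056831, 0.906179845938664]
WEIGHTS = [0.23692688505618908, 0.47862867049936647, 0.5688888888888889,
           0.47862867049936647, 0.23692688505618908]

def W(x, y):
    """Toy nearest-neighbour Boltzmann weight of a 1-D chain."""
    return math.exp(-(x - y) ** 2)

def chain_integral(n_sites):
    """Z = int dx_1..dx_n prod_i W(x_i, x_{i+1}) over [-1,1]^n, computed by
    applying the 1-D rule recursively (transfer-matrix style), so the cost
    grows linearly in n instead of exponentially."""
    f = [1.0] * len(NODES)                      # f_1(x) = 1 on the grid
    for _ in range(n_sites - 1):                # fold in one site at a time
        f = [sum(w * W(x, y) * fy
                 for w, y, fy in zip(WEIGHTS, NODES, f)) for x in NODES]
    return sum(w * fx for w, fx in zip(WEIGHTS, f))

# Cross-check against brute-force nested quadrature for n = 3 (125 terms).
brute = sum(wa * wb * wc * W(a, b) * W(b, c)
            for wa, a in zip(WEIGHTS, NODES)
            for wb, b in zip(WEIGHTS, NODES)
            for wc, c in zip(WEIGHTS, NODES))
print(abs(chain_integral(3) - brute) < 1e-12)  # -> True
```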
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
Quantum correlation of high dimensional system in a dephasing environment
Ji, Yinghua; Ke, Qiang; Hu, Juju
2018-05-01
For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolutions of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than that in the sub-Ohmic or Ohmic thermal reservoir environment.
Applying recursive numerical integration techniques for solving high dimensional integrals
Energy Technology Data Exchange (ETDEWEB)
Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik
2016-11-15
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
On the space dimensionality based on metrics
International Nuclear Information System (INIS)
Gorelik, G.E.
1978-01-01
A new approach to space-time dimensionality is suggested which permits one to take into account the possibility that dimensionality changes with the scale of the phenomenon. An attempt is made to give a definition of dimensionality that is equivalent to the conventional one for Euclidean spaces and manifolds. The conventional definition of the dimensionality of a manifold relies on the possibility of mapping Euclidean space homeomorphically onto some neighbourhood of each point of the manifold
Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics
Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.
2018-04-01
In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP that we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum-space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As an application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic-order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas's maximum internal energy N/(2β) and recover the zero heat capacity of an ideal gas.
Entropy, extremality, euclidean variations, and the equations of motion
Dong, Xi; Lewkowycz, Aitor
2018-01-01
We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G N , providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G N . We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
Euclidean quantum field theory and the Hawking effect
International Nuclear Information System (INIS)
Lapedes, A.S.
1978-01-01
Complex analytic continuation in a time variable in order to define a Feynman propagator is investigated in a general relativistic context. When external electric fields are present a complex analytic continuation in the electric charge is also introduced. The new Euclidean formalism is checked by reproducing Schwinger's special relativistic result for pair creation by an external, homogeneous, electric field, and then applied to the Robinson-Bertotti universe. The Robinson-Bertotti universe, although unphysical, provides an interesting theoretical laboratory in which to investigate quantum effects, much as the unphysical Taub-NUT (Newman-Unti-Tamburino) universe does for purely classical general relativity. A conformally related problem of pair creation by a supercritically charged nucleus is also considered, and a sensible resolution is obtained to this classic problem. The essential mathematical point throughout is the use of the Feynman path-integral form of the propagator to motivate replacing hyperbolic equations by elliptic equations. The unique, bounded solution for the elliptic Green's function is then analytically continued back to physical values to define the Feynman Green's function.
Numerical evaluation of tensor Feynman integrals in Euclidean kinematics
Energy Technology Data Exchange (ETDEWEB)
Gluza, J.; Kajda [Silesia Univ., Katowice (Poland). Inst. of Physics; Riemann, T.; Yundin, V. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)
2010-10-15
For the investigation of higher order Feynman integrals, potentially with tensor structure, it is highly desirable to have numerical methods and automated tools for dedicated, but sufficiently 'simple' numerical approaches. We elaborate two algorithms for this purpose which may be applied in the Euclidean kinematical region and in d=4-2ε dimensions. One method uses Mellin-Barnes representations for the Feynman parameter representation of multi-loop Feynman integrals with arbitrary tensor rank. Our Mathematica package AMBRE has been extended for that purpose, and together with the packages MB (M. Czakon) or MBresolve (A. V. Smirnov and V. A. Smirnov) one may perform automatically a numerical evaluation of planar tensor Feynman integrals. Alternatively, one may apply sector decomposition to planar and non-planar multi-loop ε-expanded Feynman integrals with arbitrary tensor rank. We automatized the preparations of Feynman integrals for an immediate application of the package sectordecomposition (C. Bogner and S. Weinzierl) so that one has to give only a proper definition of propagators and numerators. The efficiency of the two implementations, based on Mellin-Barnes representations and sector decompositions, is compared. The computational packages are publicly available. (orig.)
Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning
Sagun, Levent
This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
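The binary-encoded genetic algorithm described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: the surrogate fitness function below (which rewards keeping five "informative" features and penalizes keeping noise features) is an invented proxy for true classifier accuracy, and all parameter values are arbitrary.

```python
import random

def gene_mask_ga(n_features, fitness, pop=20, gens=40, seed=1):
    """Binary-encoded genetic algorithm: each individual is a 0/1 mask over
    the features; higher fitness(mask) means a better feature subset."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]                    # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)        # single point mutation
            child[i] ^= 1
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Invented surrogate for classifier accuracy: features 0-4 are informative,
# the remaining 15 are noise that should be masked out.
def fitness(mask):
    return sum(mask[:5]) - 0.5 * sum(mask[5:])

best = gene_mask_ga(20, fitness)
print(best[:5], sum(best[5:]))  # informative features kept, noise mostly masked
```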
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Progress in high-dimensional percolation and random graphs
Heydenreich, Markus
2017-01-01
This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...
Effects of dependence in high-dimensional multiple testing problems
Directory of Open Access Journals (Sweden)
van de Wiel Mark A
2008-02-01
Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method on π0 or FDR estimation in a dependency context.
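The step-up Benjamini-Hochberg procedure studied in the abstract above has a compact closed form: reject the hypotheses with the k smallest p-values, where k is the largest rank i such that p_(i) ≤ (i/m)·α. A minimal sketch (the p-values are invented for illustration):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure: reject H_(1)..H_(k) where k is the largest i
    with p_(i) <= (i/m) * alpha; controls the FDR at level alpha under
    independence (and positive-regression dependence)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):   # walk up the sorted p-values
        if pvals[i] <= rank * alpha / m:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
print(sum(benjamini_hochberg(pv, alpha=0.05)))  # -> 2
```

Note the step-up character: every hypothesis below the crossing rank k is rejected, even if its own p-value misses the per-rank threshold.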
High-dimensional quantum cryptography with twisted light
International Nuclear Information System (INIS)
Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J
2015-01-01
Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
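The core operation described above, estimating a sparse difference of two correlation matrices by thresholding, can be sketched as follows. This is a simplified stand-in: a single hard threshold replaces the paper's adaptive, entry-dependent thresholds, and the synthetic data are invented for illustration.

```python
import math, random

def corr_matrix(X):
    """Sample Pearson correlation matrix of an n-by-p data set (list of rows)."""
    n, p = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(p)]
    sd = [math.sqrt(sum((row[j] - mean[j]) ** 2 for row in X) / n)
          for j in range(p)]
    return [[sum((row[j] - mean[j]) * (row[k] - mean[k]) for row in X)
             / (n * sd[j] * sd[k]) for k in range(p)] for j in range(p)]

def threshold_difference(R1, R2, t):
    """Hard-thresholded estimate of D = R1 - R2: entries smaller than t in
    magnitude are treated as noise and set to zero."""
    p = len(R1)
    D = [[R1[j][k] - R2[j][k] for k in range(p)] for j in range(p)]
    return [[d if abs(d) >= t else 0.0 for d in row] for row in D]

# Two synthetic conditions: features 0 and 1 are strongly correlated in the
# first condition and independent in the second, so only that entry of the
# differential correlation matrix should survive thresholding.
rng = random.Random(0)
X1 = [[x, x + 0.1 * rng.gauss(0, 1)] for x in (rng.gauss(0, 1) for _ in range(200))]
X2 = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
D = threshold_difference(corr_matrix(X1), corr_matrix(X2), t=0.3)
print(D[0][0] == 0.0, abs(D[0][1]) >= 0.3)  # -> True True
```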
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties of the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, achieving speed by eliminating irrelevant features early.
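The "standard ingredients" named above can be sketched for the plain Lasso: cyclic coordinate descent where each coordinate update is a closed-form soft-thresholding step. This is a textbook sketch (the Smoothed Concomitant variant itself is not reproduced here), and the toy data are invented for illustration.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1;
    each coordinate minimization has the closed form below."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: leave coordinate j out of the fit
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / norm
    return b

# y depends only on the first feature; the Lasso should zero out the second.
X = [[1.0, 0.5], [2.0, -0.2], [3.0, 0.1], [4.0, 0.3]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso_cd(X, y, lam=0.1)
print(abs(b[1]) < 1e-8, 1.5 < b[0] < 2.1)  # -> True True
```

The safe screening rules mentioned in the abstract would discard coordinates certified to end at exactly zero before running these updates; they are omitted from this sketch.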
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.
Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros
2018-05-01
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
Variational estimates for the mass gap of SU(2) Euclidean lattice gauge theory
International Nuclear Information System (INIS)
Hari Dass, N.D.
1984-10-01
The purpose of this letter is to report on the progress made in our understanding of series expansions for the masses in lattice gauge theories by the application of variational techniques to the Euclidean SU(2) lattice gauge theory. (Auth.)
Euclidean action for vacuum decay in a de Sitter universe
International Nuclear Information System (INIS)
Balek, V.; Demetrian, M.
2005-01-01
The behavior of the action of the instantons describing vacuum decay in a de Sitter universe is investigated. For a near-to-limit instanton (a Coleman-de Luccia instanton close to some Hawking-Moss instanton) we find approximate formulas for the Euclidean action by expanding the scalar field and the metric of the instanton in powers of the scalar field amplitude. The order of magnitude of the correction to the Hawking-Moss action depends on the order of the instanton (the number of crossings of the barrier by the scalar field): for instantons of odd and even orders the correction is of the fourth and third order in the scalar field amplitude, respectively. If a near-to-limit instanton of the first order exists in a potential with the curvature at the top of the barrier greater than 4×(Hubble constant)², which is the case if the fourth derivative of the potential at the top of the barrier is greater than some negative limit value, the action of the instanton is less than the Hawking-Moss action and, consequently, the instanton determines the outcome of the vacuum decay if no other Coleman-de Luccia instanton is admitted by the potential. A numerical study shows that for the quartic potential the physical mode of the vacuum decay is given by the Coleman-de Luccia instanton of the first order also in the region of parameters in which the potential admits two instantons of the second order
Directory of Open Access Journals (Sweden)
Yuxian Zhang
2015-01-01
The quality index model in the slashing process is difficult to build because of outliers and noise in the original data. To address this problem, a fuzzy neural network based on non-Euclidean distance clustering is proposed, in which the input space is partitioned into many local regions by fuzzy clustering based on a non-Euclidean distance, so that the computational complexity is decreased, and the number of fuzzy rules is determined by a validity function based on both the separation and the compactness among clusters. Then, the premise parameters and consequent parameters are trained by a hybrid learning algorithm. Parameter identification is thereby realized, and the convergence condition of the consequent parameters is obtained via a Lyapunov function. Finally, the proposed method is applied to build the quality index model in the slashing process, with experimental data drawn from the actual slashing process. The experimental results show that the proposed fuzzy neural network for the quality index model has lower computational complexity and faster convergence time compared with GP-FNN, BPNN, and RBFNN.
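The clustering step the abstract describes can be sketched with plain fuzzy c-means; the paper's variant would swap the squared Euclidean distance below for its non-Euclidean measure and choose the cluster count via a validity function. This NumPy sketch and its data are illustrative only.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means partitioning the input space into local
    regions; memberships U are soft (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared Euclidean distances; a non-Euclidean variant plugs in here
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated synthetic blobs around 0 and 3
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.2, (20, 2)),
               np.random.default_rng(2).normal(3.0, 0.2, (20, 2))])
centers, U = fuzzy_c_means(X, 2)
```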
Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping
DEFF Research Database (Denmark)
Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor
2005-01-01
We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
Linearization of Euclidean Norm Dependent Inequalities Applied to Multibeam Satellites Design
Camino , Jean-Thomas; Artigues , Christian; Houssin , Laurent; Mourgues , Stéphane
2016-01-01
Euclidean norm computations over continuous variables appear naturally in the constraints or in the objective of many problems in the optimization literature, possibly defining non-convex feasible regions or cost functions. When some other variables have discrete domains, it positions the problem in the challenging Mixed Integer Nonlinear Programming (MINLP) class. For any MINLP where the nonlinearity is only present in the form of inequality constraints involving the Euclidean norm, we propo...
Non-Hermitian systems of Euclidean Lie algebraic type with real energy spectra
Dey, Sanjib; Fring, Andreas; Mathanaranjan, Thilagarajah
2014-07-01
We study several classes of non-Hermitian Hamiltonian systems, which can be expressed in terms of bilinear combinations of Euclidean-Lie algebraic generators. The classes are distinguished by different versions of antilinear (PT)-symmetries exhibiting various types of qualitative behaviour. On the basis of explicitly computed non-perturbative Dyson maps we construct metric operators, isospectral Hermitian counterparts for which we solve the corresponding time-independent Schrödinger equation for specific choices of the coupling constants. In these cases general analytical expressions for the solutions are obtained in the form of Mathieu functions, which we analyze numerically to obtain the corresponding energy spectra. We identify regions in the parameter space for which the corresponding spectra are entirely real and also domains where the PT symmetry is spontaneously broken and sometimes also regained at exceptional points. In some cases it is shown explicitly how the threshold region from real to complex spectra is characterized by the breakdown of the Dyson maps or the metric operator. We establish the explicit relationship to models currently under investigation in the context of beam dynamics in optical lattices.
Directory of Open Access Journals (Sweden)
Q. Zhou
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation and largely influences its precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points in the left image at the next epoch are matched with those in the current left image, EDC and RANSAC are performed iteratively. Since a few mismatched points may still remain after these steps, RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
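The reject-by-Euclidean-residual idea in the abstract can be sketched with a toy RANSAC loop. This is a deliberate simplification: the motion model here is a pure 2D translation rather than the paper's full stereo geometry, and all data and thresholds are made up for illustration.

```python
import math
import random

def ransac_translation(matches, thresh=1.0, n_iter=200, seed=0):
    """RANSAC over point correspondences [((px,py),(qx,qy)), ...] under a
    pure-translation model; matches whose Euclidean residual exceeds
    `thresh` are rejected, mirroring the EDC threshold idea."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (px, py), (qx, qy) = rng.choice(matches)
        tx, ty = qx - px, qy - py            # candidate motion
        inliers = [m for m in matches
                   if math.hypot(m[1][0] - m[0][0] - tx,
                                 m[1][1] - m[0][1] - ty) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 30 correct matches under a known translation, plus 10 random mismatches
rng = random.Random(42)
true_t = (5.0, -2.0)
good = [((x, y), (x + true_t[0], y + true_t[1]))
        for x, y in [(rng.uniform(0, 100), rng.uniform(0, 100))
                     for _ in range(30)]]
bad = [((rng.uniform(0, 100), rng.uniform(0, 100)),
        (rng.uniform(0, 100), rng.uniform(0, 100))) for _ in range(10)]
inliers = ransac_translation(good + bad)
```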
High-dimensional statistical inference: From vector to matrix
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
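The sharp thresholds in the abstract are simple closed forms, and a tiny helper makes them concrete. The function names are my own; only the inequalities themselves come from the abstract.

```python
import math

def sharp_rip_bound(t):
    """The sharp restricted isometry threshold sqrt((t-1)/t): a constant
    delta_{tk} below this bound suffices for recovery, and the abstract
    states the bound cannot be relaxed by any epsilon > 0."""
    return math.sqrt((t - 1) / t)

def sufficient_for_recovery(delta_k, theta_kk):
    """Either stated sufficient condition for stable sparse recovery:
    delta_k < 1/3, or delta_k + theta_{k,k} < 1."""
    return delta_k < 1 / 3 or (delta_k + theta_kk) < 1

b2 = sharp_rip_bound(2)   # sqrt(1/2) ~ 0.7071
ok = sufficient_for_recovery(0.3, 0.8)
```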
Genuinely high-dimensional nonlocality optimized by complementary measurements
International Nuclear Information System (INIS)
Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung
2010-01-01
Qubits exhibit extreme nonlocality when their state is maximally entangled, and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits) recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements, with d a prime integer. Applying the approach to two qubits (d=2), we find that a derived inequality reduces to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all possible states and local observables. Further applying it to two and three qutrits (d=3), we find Bell inequalities that are violated by three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequalities.
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus
2013-11-12
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x₁,…,x_d) = f₁(x₁)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(−r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(−r)). © 2013 Springer Science+Business Media New York.
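The role of the base point z with f(z) ≠ 0 can be made concrete with the exact rank-one identity that underlies axis-aligned querying: f(x) equals the product of the d one-coordinate queries through z, divided by f(z)^(d−1). The sketch below verifies this identity numerically; it is an illustration of the principle, not the paper's adaptive algorithm.

```python
import math

def reconstruct_rank_one(f, z, x):
    """Recover f(x) for a rank-one function f(x) = f1(x1)*...*fd(xd)
    from d axis-aligned point queries through a base point z with
    f(z) != 0, using f(x) = prod_j f(z1,..,x_j,..,zd) / f(z)**(d-1)."""
    d = len(z)
    fz = f(z)
    prod = 1.0
    for j in range(d):
        q = list(z)
        q[j] = x[j]          # query along the j-th axis through z
        prod *= f(q)
    return prod / fz ** (d - 1)

# a rank-one test function on [0,1]^3
f = lambda p: math.exp(p[0]) * (1 + p[1]) * math.cos(p[2])
z = [0.5, 0.5, 0.5]
x = [0.1, 0.9, 0.3]
approx = reconstruct_rank_one(f, z, x)
```

For smooth but unknown component functions, the paper's algorithm replaces the exact queries with N well-chosen evaluations, which is where the O(dN^(−r)) rate comes from.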
Statistical mechanics of complex neural systems and high dimensional data
International Nuclear Information System (INIS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-01-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars
2013-01-01
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x₁,…,x_d) = f₁(x₁)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(−r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(−r)). © 2013 Springer Science+Business Media New York.
DEFF Research Database (Denmark)
Gravesen, Jens
2015-01-01
The space of colours is a fascinating space. It is a real vector space, but no matter what inner product you put on the space, the resulting Euclidean distance does not correspond to human perception of difference between colours. In 1942 MacAdam performed the first experiments on colour matching and found the MacAdam ellipses, which are often interpreted as defining the metric tensor at their centres. An important question is whether it is possible to define colour coordinates such that the Euclidean distance in these coordinates corresponds to human perception. Using cubic splines to represent...
Bhadra, Anindya
2013-04-22
We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
Assessment of the Log-Euclidean Metric Performance in Diffusion Tensor Image Segmentation
Directory of Open Access Journals (Sweden)
Mostafa Charmi
2010-06-01
Introduction: Appropriate definition of the distance measure between diffusion tensors has a deep impact on Diffusion Tensor Image (DTI) segmentation results. The geodesic metric is the best distance measure since it yields high-quality segmentation results. However, the important problem with the geodesic metric is the high computational cost of the algorithms based on it. The main goal of this paper is to assess the possible substitution of the geodesic metric with the Log-Euclidean one to reduce the computational cost of a statistical surface evolution algorithm. Materials and Methods: We incorporated the Log-Euclidean metric in the statistical surface evolution algorithm framework. To achieve this goal, the statistics and gradients of diffusion tensor images were defined using the Log-Euclidean metric. Numerical implementation of the segmentation algorithm was performed in MATLAB using finite difference techniques. Results: In the statistical surface evolution framework, the Log-Euclidean metric was able to discriminate the torus and helix patterns in synthetic datasets, and rat spinal cords in biological phantom datasets, from the background better than the Euclidean and J-divergence metrics. In addition, similar results were obtained with the geodesic metric. However, the main advantage of the Log-Euclidean metric over the geodesic metric was the dramatic reduction of the computational cost of the segmentation algorithm, by a factor of at least 70. Discussion and Conclusion: The qualitative and quantitative results show that the Log-Euclidean metric is a good substitute for the geodesic metric when using a statistical surface evolution algorithm in DTI segmentation.
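The Log-Euclidean distance between two diffusion tensors (symmetric positive-definite matrices) is simply the Frobenius norm of the difference of their matrix logarithms, which is why it is so much cheaper than the geodesic metric. A minimal NumPy sketch:

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition: log(A) = V diag(log w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance ||log(A) - log(B)||_F between tensors;
    a fast stand-in for the affine-invariant geodesic metric."""
    return np.linalg.norm(spd_log(A) - spd_log(B), 'fro')

# two simple diagonal diffusion tensors
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([2.0, 2.0, 3.0])
d = log_euclidean_distance(A, B)
```

Because the logarithms can be precomputed once per voxel, the subsequent statistics reduce to ordinary Euclidean operations, which is the source of the speedup reported in the abstract.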
Renormalized G-convolution of n-point functions in quantum field theory. I. The Euclidean case
International Nuclear Information System (INIS)
Bros, Jacques; Manolessou-Grammaticou, Marietta.
1977-01-01
The notion of a Feynman amplitude associated with a graph G in perturbative quantum field theory admits a generalized version in which each vertex v of G is associated with a general (non-perturbative) n_v-point function H^(n_v), where n_v denotes the number of lines incident to v in G. In the case where no ultraviolet divergence occurs, this has been performed directly in complex momentum space through Bros-Lassalle's G-convolution procedure. The authors propose a generalization of G-convolution which includes the case when the functions H^(n_v) are not integrable at infinity but belong to a suitable class of slowly increasing functions. A finite part of the G-convolution integral is then defined through an algorithm which closely follows Zimmermann's renormalization scheme. Only the case of Euclidean four-momentum configurations is treated.
Matrix correlations for high-dimensional data: The modified RV-coefficient
Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van
2009-01-01
Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
Directory of Open Access Journals (Sweden)
Hajime Matsui
2017-12-01
In this study, we consider codes over Euclidean domains modulo their ideals. In the first half of the study, we deal with arbitrary Euclidean domains. We show that the product of generator matrices of codes over the rings mod a and mod b produces generator matrices of all codes over the ring mod ab, i.e., this correspondence is onto. Moreover, we show that if a and b are coprime, then this correspondence is one-to-one, i.e., there exist unique codes over the rings mod a and mod b that produce any given code over the ring mod ab through the product of their generator matrices. In the second half of the study, we focus on the typical Euclidean domains such as the rational integer ring, one-variable polynomial rings, rings of Gaussian and Eisenstein integers, p-adic integer rings and rings of one-variable formal power series. We define the reduced generator matrices of codes over Euclidean domains modulo their ideals and show their uniqueness. Finally, we apply our theory of reduced generator matrices to the Hecke rings of matrices over these Euclidean domains.
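The one-to-one correspondence for coprime a and b is, at heart, a Chinese-remainder phenomenon. The sketch below illustrates only that arithmetic backbone in the rational-integer ring, via the extended Euclidean algorithm; it is not the paper's generator-matrix construction.

```python
from math import gcd

def crt_pair(r_a, a, r_b, b):
    """Combine residues mod coprime a and b into the unique residue
    mod a*b, using extended Euclid to find u, v with u*a + v*b == 1."""
    old_r, r = a, b
    old_u, u = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
    v = (old_r - old_u * a) // b          # old_r == gcd(a, b) == 1
    # v*b is 1 mod a and 0 mod b; old_u*a is 1 mod b and 0 mod a
    return (r_a * v * b + r_b * old_u * a) % (a * b)

assert gcd(12, 35) == 1
x = crt_pair(7, 12, 4, 35)   # unique residue mod 420
```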
On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means
Directory of Open Access Journals (Sweden)
George Livadiotis
2017-05-01
This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means. Let the L_p-mean estimator be the functional that estimates the L_p-mean of N independent and identically distributed random variables. Then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) in the limit N → ∞ the L_p-mean estimator also equals the mean of the distributions.
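An L_p-mean of a sample can be computed as the minimizer of the p-th power deviations; for p = 2 this recovers the arithmetic mean, and for p near 1 it approaches the median. The minimization method below (ternary search on a convex objective for p ≥ 1) is my own choice of illustration, not taken from the paper.

```python
def lp_mean(xs, p, iters=200):
    """L_p-mean of a sample: the m minimizing sum_i |x_i - m|**p,
    found by ternary search (the objective is convex for p >= 1)."""
    lo, hi = min(xs), max(xs)
    cost = lambda m: sum(abs(x - m) ** p for x in xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

xs = [1.0, 2.0, 3.0, 10.0]
m2 = lp_mean(xs, 2)        # arithmetic mean of the sample
m1 = lp_mean(xs, 1.0001)   # close to the median region [2, 3]
```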
Products of Snowflaked Euclidean Lines Are Not Minimal for Looking Down
Directory of Open Access Journals (Sweden)
Joseph Matthieu
2017-11-01
We show that products of snowflaked Euclidean lines are not minimal for looking down. This question was raised in Fractured fractals and broken dreams, Problem 11.17, by David and Semmes. The proof uses arguments developed by Le Donne, Li and Rajala to prove that the Heisenberg group is not minimal for looking down. By a method of shortcuts, we define a new distance d such that the product of snowflaked Euclidean lines looks down on (ℝ^N, d), but not vice versa.
Biomarker identification and effect estimation on schizophrenia –a high dimensional data analysis
Directory of Open Access Journals (Sweden)
Yuanzhang eLi
2015-05-01
Biomarkers have been examined in schizophrenia research for decades. High medical morbidity and mortality rates, as well as personal and societal costs, are associated with schizophrenia patients. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is no commonly accepted statistical approach to identify predictive biomarkers from high dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers associated with the risk of schizophrenia. Then, we used the gradient scores, generated from the selected biomarkers, as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression tree (CART), to compare with the biomarkers selected by DGR and found that about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate individual effects for each biomarker from their combined effect. In a DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia for males; and Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.
On Euclidean connections for su(1,1), su_q(1,1) and the algebraic approach to scattering
International Nuclear Information System (INIS)
Ionescu, R.A.
1994-11-01
We obtain a general Euclidean connection for the su(1,1) and su_q(1,1) algebras. Our Euclidean connection allows an algebraic derivation of the S matrix. These algebraic S matrices reduce to the known ones in suitable circumstances. We also obtain a map between su(1,1) and su_q(1,1) representations. (author). 8 refs
Counting and classifying attractors in high dimensional dynamical systems.
Bagley, R J; Glass, L
1996-12-07
Randomly connected Boolean networks have been used as mathematical models of neural, genetic, and immune systems. A key quantity of such networks is the number of basins of attraction in the state space. The number of basins of attraction changes as a function of the size of the network, its connectivity and its transition rules. In discrete networks, a simple count of the number of attractors does not reveal the combinatorial structure of the attractors. These points are illustrated in a reexamination of dynamics in a class of random Boolean networks considered previously by Kauffman. We also consider comparisons between dynamics in discrete networks and continuous analogues. A continuous analogue of a discrete network may have a different number of attractors for many different reasons. Some attractors in discrete networks may be associated with unstable dynamics, and several different attractors in a discrete network may be associated with a single attractor in the continuous case. Special problems in determining attractors in continuous systems arise when there is aperiodic dynamics associated with quasiperiodicity or deterministic chaos.
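For small networks, the attractor count discussed above can be obtained by brute force: iterate the deterministic update map from every state and collect the cycles reached. This generic N-K network sketch is illustrative; the specific ensembles studied by Kauffman and the authors differ in details.

```python
import random

def attractors(n, k=2, seed=0):
    """Enumerate the attractors (limit cycles) of a random Boolean
    network with n nodes, each driven by k random inputs and a random
    truth table, by exhausting all 2**n states."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n))

    found = set()
    for s in range(2 ** n):
        state = tuple((s >> i) & 1 for i in range(n))
        seen = set()
        while state not in seen:          # walk until the orbit repeats
            seen.add(state)
            state = step(state)
        cycle, cur = [], state            # extract the cycle reached
        while True:
            cycle.append(cur)
            cur = step(cur)
            if cur == state:
                break
        found.add(frozenset(cycle))
    return found

atts = attractors(8)   # attractors of one random 8-node, K=2 network
```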
Mitigating the Insider Threat Using High-Dimensional Search and Modeling
National Research Council Canada - National Science Library
Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago
2006-01-01
In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...
Usability Evaluation of an Augmented Reality System for Teaching Euclidean Vectors
Martin-Gonzalez, Anabel; Chi-Poot, Angel; Uc-Cetina, Victor
2016-01-01
Augmented reality (AR) is one of the emerging technologies that has demonstrated to be an efficient technological tool to enhance learning techniques. In this paper, we describe the development and evaluation of an AR system for teaching Euclidean vectors in physics and mathematics. The goal of this pedagogical tool is to facilitate user's…
Rooij, van I.; Stege, U.; Schactman, A.
2003-01-01
Recently there has been growing interest among psychologists in human performance on the Euclidean traveling salesperson problem (E-TSP). A debate has been initiated on what strategy people use in solving visually presented E-TSP instances. The most prominent hypothesis is the convex-hull
International Nuclear Information System (INIS)
Pordt, A.
1985-10-01
The author describes the Mayer expansion in Euclidean lattice field theory by comparing it with the statistical mechanics of polymer systems. In this connection he discusses the Borel summability and the analyticity of the activities on the lattice. Furthermore the relations between renormalization and the Mayer expansion are considered. (HSI)
Fast Exact Euclidean Distance (FEED): A new class of adaptable distance transforms
Schouten, Theo E.; van den Broek, Egon
2014-01-01
A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is
Fast Exact Euclidean Distance (FEED) : A new class of adaptable distance transforms
Schouten, Theo E.; van den Broek, Egon L.
2014-01-01
A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is introduced,
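The "definition" that FEED starts from is the exact Euclidean distance transform itself: for every pixel, the distance to the nearest object pixel. The brute-force baseline below implements that definition directly; FEED's contribution is computing the same exact result far faster by letting each object pixel "feed" distances into the image.

```python
import math

def exact_edt(grid):
    """Brute-force exact Euclidean distance transform of a binary
    image: distance from each pixel to the nearest object (1) pixel."""
    h, w = len(grid), len(grid[0])
    objs = [(y, x) for y in range(h) for x in range(w) if grid[y][x] == 1]
    return [[min(math.hypot(y - oy, x - ox) for oy, ox in objs)
             for x in range(w)] for y in range(h)]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
dt = exact_edt(grid)
```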
Loci of points in the Euclidean plane are determined from ...
Indian Academy of Sciences (India)
Loci of points in the Euclidean plane are determined from prescribed relations of the points with given points, and/or, lines. The dependence of these relations on parameters leads to the differential equations representing the family of loci under concern. Incidentally most of the differential equations thus obtained are non ...
Non-Euclidean spacetime structure and the two-slit experiment
International Nuclear Information System (INIS)
El Naschie, M.S.
2005-01-01
A simple mathematical model for the two-slit experiment is given to account for the wave-particle duality. Subsequently, the various solutions are interpreted via the experimental evidence as a property of the underlying non-Euclidean spacetime topology and geometry at the quantum level
Using a High-Dimensional Graph of Semantic Space to Model Relationships among Words
Directory of Open Access Journals (Sweden)
Alice F Jackson
2014-05-01
The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study employs GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph-structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition.
Using a high-dimensional graph of semantic space to model relationships among words.
Jackson, Alice F; Bolger, Donald J
2014-01-01
The GOLD model (Graph Of Language Distribution) is a network model constructed based on co-occurrence in a large corpus of natural language that may be used to explore what information may be present in a graph-structured model of language, and what information may be extracted through theoretically-driven algorithms as well as standard graph analysis methods. The present study employs GOLD to examine two types of relationship between words: semantic similarity and associative relatedness. Semantic similarity refers to the degree of overlap in meaning between words, while associative relatedness refers to the degree to which two words occur in the same schematic context. It is expected that a graph-structured model of language constructed based on co-occurrence should easily capture associative relatedness, because this type of relationship is thought to be present directly in lexical co-occurrence. However, it is hypothesized that semantic similarity may be extracted from the intersection of the set of first-order connections, because two words that are semantically similar may occupy similar thematic or syntactic roles across contexts and thus would co-occur lexically with the same set of nodes. Two versions of the GOLD model that differed in terms of the co-occurrence window, bigGOLD at the paragraph level and smallGOLD at the adjacent word level, were directly compared to the performance of a well-established distributional model, Latent Semantic Analysis (LSA). The superior performance of the GOLD models (big and small) suggests that a single acquisition and storage mechanism, namely co-occurrence, can account for associative and conceptual relationships between words and is more psychologically plausible than models using singular value decomposition (SVD).
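The two mechanisms in the abstract, direct co-occurrence for associative relatedness and intersection of first-order connections for semantic similarity, can be sketched on a toy corpus. This is a minimal illustration of the idea, not the GOLD implementation; the window size and the Jaccard overlap are my own simplifications.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Co-occurrence graph: nodes are words, edge weights count
    co-occurrences within a sliding window (associative relatedness)."""
    graph = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                if v != w:
                    graph[w][v] += 1
                    graph[v][w] += 1
    return graph

def neighbor_overlap(graph, a, b):
    """Jaccard overlap of first-order neighbourhoods: the
    intersection-of-connections proxy for semantic similarity."""
    na, nb = set(graph[a]), set(graph[b])
    return len(na & nb) / len(na | nb) if na | nb else 0.0

sents = ["the cat chased the mouse", "the dog chased the mouse",
         "the cat ate fish", "the dog ate meat"]
g = cooccurrence_graph(sents)
sim = neighbor_overlap(g, "cat", "dog")   # never co-occur, yet similar
```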
International Nuclear Information System (INIS)
Zhang, Liangwei; Lin, Jing; Karim, Ramin
2015-01-01
The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, hampering many real-world applications. This work proposes an approach that selects a meaningful feature subspace and conducts anomaly detection in the corresponding subspace projection, with the aim of maintaining detection accuracy in high-dimensional settings. For each anomaly candidate, the approach assesses the angle between two lines: the first connects the data point to the center of its adjacent points; the other is one of the axis-parallel lines. The dimensions whose axis-parallel lines make a relatively small angle with the first line are chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found that the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm to fault detection tasks and highlighted another of its merits, namely, providing a preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration confirms its efficacy in fault detection applications.
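A minimal numpy sketch of the angle-based subspace idea described above. The parameter names and thresholding choices are illustrative assumptions, not the paper's ABSAD specification:

```python
import numpy as np

def absad_like_score(X, idx, k=10, n_dims=2):
    """Sketch: pick the dims whose axis makes the smallest angle with the
    line from point idx to the center of its k nearest neighbours, then
    take a normalized Mahalanobis distance in that subspace."""
    p = X[idx]
    d2 = np.sum((X - p) ** 2, axis=1)
    nn = np.argsort(d2)[1:k + 1]              # k nearest neighbours (exclude self)
    center = X[nn].mean(axis=0)
    d = center - p
    # |cos| of the angle between d and each axis-parallel line
    cos = np.abs(d) / (np.linalg.norm(d) + 1e-12)
    subspace = np.argsort(cos)[-n_dims:]      # dims most aligned with d
    # normalized Mahalanobis distance in the subspace projection
    Z = X[nn][:, subspace]
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(n_dims)
    diff = p[subspace] - Z.mean(axis=0)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff / n_dims))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
X[0, 3] = 8.0                                 # inject an anomaly in one dimension
scores = [absad_like_score(X, i) for i in range(len(X))]
print(int(np.argmax(scores)))
```

The injected point deviates in a single dimension, so the line to its neighbours' center is nearly axis-parallel there; the subspace selection keeps exactly that dimension and the Mahalanobis score flags it.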
Nonrenormalizable quantum field models in four-dimensional space-time
International Nuclear Information System (INIS)
Raczka, R.
1978-01-01
The construction of no-cutoff Euclidean Green's functions for nonrenormalizable interactions L_I(φ) = λ∫ dδ(ε) :exp(εφ): in four-dimensional space-time is carried out. It is shown that all axioms for the generating functional of the Euclidean Green's functions are satisfied, except perhaps SO(4) invariance.
Elements of linear space
Amir-Moez, A R; Sneddon, I N
1962-01-01
Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given. Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices...
Finite metric spaces of strictly negative type
DEFF Research Database (Denmark)
Hjorth, Poul G.
If a finite metric space is of strictly negative type then its transfinite diameter is uniquely realized by an infinite extent ("load vector"). Finite metric spaces that have this property include all trees, and all finite subspaces of Euclidean and hyperbolic spaces. We prove that if the distance...
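The negative-type condition mentioned above can be checked numerically: for the distance matrix D of points in Euclidean space, the quadratic form x^T D x is non-positive for every weight vector x summing to zero (a classical fact consistent with the abstract; the code is an illustrative check, not from the paper):

```python
import numpy as np

def negative_type_quadratic_form(D, x):
    """Evaluate sum_ij x_i x_j D_ij for a weight vector with sum(x) = 0.
    A metric is of negative type iff this is <= 0 for all such x."""
    assert abs(x.sum()) < 1e-9
    return float(x @ D @ x)

rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))                        # 6 random points in R^3
D = np.linalg.norm(P[:, None] - P[None], axis=-1)  # Euclidean distance matrix
for _ in range(1000):
    x = rng.normal(size=6)
    x -= x.mean()                                  # enforce sum(x) = 0
    assert negative_type_quadratic_form(D, x) <= 1e-9
print("finite Euclidean subspace passes the negative-type test")
```

Random sampling of weight vectors is of course not a proof; it merely illustrates the inequality that the strictly-negative-type property sharpens to strict for x ≠ 0.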
Low-dimensional geometry from euclidean surfaces to hyperbolic knots
Bonahon, Francis
2009-01-01
The study of 3-dimensional spaces brings together elements from several areas of mathematics. The most notable are topology and geometry, but elements of number theory and analysis also make appearances. In the past 30 years, there have been striking developments in the mathematics of 3-dimensional manifolds. This book aims to introduce undergraduate students to some of these important developments. Low-Dimensional Geometry starts at a relatively elementary level, and its early chapters can be used as a brief introduction to hyperbolic geometry. However, the ultimate goal is to describe the very recently completed geometrization program for 3-dimensional manifolds. The journey to reach this goal emphasizes examples and concrete constructions as an introduction to more general statements. This includes the tessellations associated to the process of gluing together the sides of a polygon. Bending some of these tessellations provides a natural introduction to 3-dimensional hyperbolic geometry and to the theory o...
Energy Technology Data Exchange (ETDEWEB)
Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)
2012-09-15
While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignment represents in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.
3D-Ising model as a string theory in three-dimensional euclidean space
International Nuclear Information System (INIS)
Sedrakyan, A.
1992-11-01
A three-dimensional string model is analyzed in the strong coupling regime. The contribution of surfaces with different topology to the partition function is essential. A set of corresponding models is discovered. Their critical indices, which depend on two integers (m,n), are calculated analytically. The critical indices of the three-dimensional Ising model should belong to this set. A possible connection with the chain of three-dimensional lattice Potts models is pointed out. (author) 22 refs.; 2 figs
On the areas of various bodies in the Euclidean space: The case of irregular convex polygons
International Nuclear Information System (INIS)
Ozoemena, P.C.
1988-11-01
A theorem is proposed for the areas of n-sided irregular convex polygons, of given length of sides. The theorem is illustrated as a simple but powerful one in estimating the areas of irregular polygons, being dependent only on the number of sides n (and not on any of the explicit angles) of the irregular polygon. Finally, because of the global symmetry shown by equilateral triangles, squares and circles under group (gauge) theory, the relationships governing their areas, when they are inscribed or escribed in one another are discussed as riders, and some areas of their applications in graph theory, ratios and maxima and minima problems of differential calculus briefly mentioned. (author). 11 refs, 6 figs, 1 tab
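For comparison with the side-length-based theorem discussed above, the standard shoelace formula computes the area of an irregular convex polygon when the vertex coordinates (rather than only the side lengths) are known. This is a textbook baseline, not the paper's result:

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon whose vertices are
    given in order (clockwise or counterclockwise)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # unit square → 1.0
print(polygon_area([(0, 0), (4, 0), (0, 3)]))          # 3-4-5 triangle → 6.0
```

The coordinate-based formula needs no angle information either, which makes it a useful sanity check against any estimate produced from side lengths alone.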
Lorentz-force equations as Heisenberg equations for a quantum system in the euclidean space
International Nuclear Information System (INIS)
Rodriguez D, R.
2007-01-01
In an earlier work, the dynamic equations for a relativistic charged particle under the action of electromagnetic fields were formulated by R. Yamaleev in terms of external as well as internal momenta. Evolution equations for the external momenta, the Lorentz-force equations, were derived from the evolution equations for the internal momenta. The mappings between the observables of external and internal momenta are related by Viète's formulas for a quadratic polynomial, the characteristic polynomial of the relativistic dynamics. In this paper we show that this system of dynamic equations can be cast into the Heisenberg scheme for a four-dimensional quantum system. Within this scheme the equations in terms of internal momenta play the role of evolution equations for a state vector, whereas the external momenta obey the Heisenberg equation for an operator evolution. The solutions of the Lorentz-force equation for motion inside constant electromagnetic fields are presented via pentagonometric functions. (Author)
Wang, Xueyi
2012-02-08
The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds the nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds the nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the number of distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree-based k-NN algorithm for all datasets and performs better than a ball-tree-based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
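The two stages of kMkNN can be sketched as follows. This is a simplified reimplementation from the abstract's description, with illustrative choices (cluster count, Lloyd iterations) that are not from the paper:

```python
import numpy as np

def kmknn_build(X, n_clusters=10, iters=20, seed=0):
    """Buildup stage: plain k-means (Lloyd's algorithm) over the training set."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers, labels

def kmknn_query(X, centers, labels, q, k=1):
    """Search stage: scan clusters nearest-first, pruning candidates with the
    triangle inequality d(q, x) >= |d(q, c) - d(c, x)|."""
    d_qc = np.linalg.norm(centers - q, axis=1)
    order = np.argsort(d_qc)
    best = []  # sorted list of (distance, index)
    for c in order:
        idx = np.flatnonzero(labels == c)
        d_cx = np.linalg.norm(X[idx] - centers[c], axis=1)
        lower = np.abs(d_qc[c] - d_cx)            # triangle-inequality lower bounds
        for i, lb in zip(idx, lower):
            if len(best) == k and lb >= best[-1][0]:
                continue                          # pruned: no distance computation
            d = np.linalg.norm(X[i] - q)
            best = sorted(best + [(d, i)])[:k]
    return [i for _, i in best]
```

Because the pruning step only skips points whose lower bound already exceeds the current kth-best distance, the result is exact, matching a brute-force scan while computing fewer distances.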
Self-dual solutions to Euclidean Yang-Mills equations
International Nuclear Information System (INIS)
Corrigan, E.
1979-01-01
The paper provides an introduction to two approaches towards understanding the classical Yang-Mills field equations. On the one hand, the work of Atiyah and Ward showed that the self-dual equations, which are non-linear, could be regarded as a set of linear equations which turned out to be related to each other by Baecklund transformations. Fundamental to their procedure was the observation that the information carried by the vector potential could be coded into the structure of certain analytic vector bundles over a three-dimensional projective space. The classification of these bundles and the subsequent recovery of the gauge field led to the infinite set of ansaetze corresponding to the sets of linear equations mentioned above. On the other hand, Atiyah, Hitchin, Drinfeld and Manin have recently constructed, completely algebraically, the bundles of interest and indicated how the Yang-Mills potential may be obtained. Remarkably, their construction differs very little as the gauge group is changed (to any of the classical compact groups) and uses only the elementary operations of linear algebra to yield potentials as rational functions of the spatial coordinates. (Auth.)
An excursion through elementary mathematics, volume ii euclidean geometry
Caminha Muniz Neto, Antonio
2018-01-01
This book provides a comprehensive, in-depth overview of elementary mathematics as explored in Mathematical Olympiads around the world. It expands on topics usually encountered in high school and could even be used as preparation for a first-semester undergraduate course. This second volume covers Plane Geometry, Trigonometry, Space Geometry, Vectors in the Plane, Solids and much more. As part of a collection, the book differs from other publications in this field by not being a mere selection of questions or a set of tips and tricks that applies to specific problems. It starts from the most basic theoretical principles, without being either too general or too axiomatic. Examples and problems are discussed only if they are helpful as applications of the theory. Propositions are proved in detail and subsequently applied to Olympic problems or to other problems at the Olympic level. The book also explores some of the hardest problems presented at National and International Mathematics Olympiads, as well as many...
Engineering two-photon high-dimensional states through quantum interference
Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew
2016-01-01
Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang
2017-09-27
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
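The difficulty the paper addresses can be seen directly: when the dimension exceeds the sample size, the sample covariance is singular and its log-determinant degenerates, while even a crude shrinkage estimator stays finite. The shrinkage form below is an illustrative stand-in, not necessarily one of the paper's eight methods:

```python
import numpy as np

def log_det_sample(X):
    """Log-determinant of the plain sample covariance matrix."""
    S = np.cov(X, rowvar=False)
    sign, logdet = np.linalg.slogdet(S)
    return logdet if sign > 0 else -np.inf

def log_det_shrinkage(X, alpha=0.1):
    """Log-determinant of a simple shrinkage estimator
    (1 - alpha) * S + alpha * mean(diag(S)) * I; alpha is an
    illustrative choice, not a tuned value."""
    S = np.cov(X, rowvar=False)
    target = np.mean(np.diag(S)) * np.eye(S.shape[0])
    return np.linalg.slogdet((1 - alpha) * S + alpha * target)[1]

rng = np.random.default_rng(0)
p, n = 50, 40                    # dimension exceeds sample size
X = rng.normal(size=(n, p))      # true covariance = I, so true log-det = 0
print(log_det_sample(X))         # -inf or hugely negative: S is rank-deficient
print(log_det_shrinkage(X))      # finite, far closer to the truth
```

This is exactly the regime where the choice among covariance estimators, the subject of the comparison study, dominates the quality of the determinant estimate.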
Membrane paradigm and entropy of black holes in the Euclidean action approach
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zaslavskii, Oleg B.
2011-01-01
The membrane paradigm approach to black holes fixes in the vicinity of the event horizon a fictitious surface, the stretched horizon, so that the spacetime outside remains unchanged and the spacetime inside is vacuum. Using this powerful method, several black hole properties have been found and settled, such as the horizon's viscosity, electrical conductivity, resistivity, as well as other properties. On the other hand, the Euclidean action approach to black hole spacetimes has been very fruitful in understanding black hole entropy. Combining both the Euclidean action and membrane paradigm approaches, a direct derivation of the black hole entropy is given. In the derivation, it is considered that the only fields present are the gravitational and matter fields, with no electric field.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Directory of Open Access Journals (Sweden)
M. Revathy
2015-01-01
Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient LDPC decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.
Structure functions at small xBj in a Euclidean field theory approach
International Nuclear Information System (INIS)
Hebecker, A.; Meggiolaro, E.; Nachtmann, O.
2000-01-01
The small-x_Bj limit of deep inelastic scattering is related to the high-energy limit of the forward Compton amplitude in a familiar way. We show that the analytic continuation of this amplitude in the energy variable is calculable from a matrix element in Euclidean field theory. This matrix element can be written as a Euclidean functional integral in an effective field theory. Its effective Lagrangian has a simple expression in terms of the original Lagrangian. The functional integral expression obtained can, at least in principle, be evaluated using genuinely non-perturbative methods, e.g., on the lattice. Thus, a fundamentally new approach to the long-standing problem of structure functions at very small x_Bj seems possible. We give arguments that the limit x_Bj → 0 corresponds to a critical point of the effective field theory where the correlation length becomes infinite in one direction.
Linear stability theory as an early warning sign for transitions in high dimensional complex systems
International Nuclear Information System (INIS)
Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft
2016-01-01
We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high-dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and to high-dimensional replicator systems with a stochastic element. A high-dimensional stability matrix is derived in the mean-field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
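The early-warning construction, projecting the instantaneous configuration onto the unstable eigen-directions of the mean-field stability matrix, can be sketched as follows (illustrative only; the Tangled Nature specifics are omitted, and for non-symmetric matrices the summed squared projections are an approximation since the eigenvectors need not be orthogonal):

```python
import numpy as np

def instability_indicator(J, state):
    """Overlap of the normalized configuration vector with the unstable
    eigen-directions (Re(lambda) > 0) of a stability matrix J."""
    eigvals, eigvecs = np.linalg.eig(J)
    unstable = eigvecs[:, eigvals.real > 0]
    if unstable.shape[1] == 0:
        return 0.0                         # fully stable: no warning signal
    v = state / np.linalg.norm(state)
    # total squared projection onto the unstable directions
    return float(np.sum(np.abs(unstable.conj().T @ v) ** 2))

J = np.diag([-1.0, -0.5, 0.2])             # one unstable direction (illustrative)
print(instability_indicator(J, np.array([0.0, 0.0, 1.0])))  # aligned → 1.0
print(instability_indicator(J, np.array([1.0, 0.0, 0.0])))  # orthogonal → 0.0
```

A rising indicator means the stochastic system's configuration is drifting into the directions along which the deterministic approximation predicts escape, which is what makes it usable as an early-warning sign.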
Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton
2014-07-30
Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the...
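For contrast with the gradient-free method of the abstract, the classic gradient-based active-subspace discovery that it improves on can be sketched in a few lines (a standard sketch under the usual definitions; variable names are illustrative):

```python
import numpy as np

def active_subspace(grads, k=1):
    """Classic AS discovery: eigendecompose C = E[grad f grad f^T] and
    keep the top-k eigenvectors as the projection onto the AS."""
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]]

# f(x) = (w . x)^2 varies only along w: a one-dimensional active subspace
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0]) / 5.0
X = rng.normal(size=(500, 2))
grads = (2 * (X @ w))[:, None] * w[None]   # analytic gradients of f
W = active_subspace(grads, k=1)
print(np.abs(W[:, 0] @ w))                 # ≈ 1: the recovered AS aligns with w
```

The sketch makes the limitation plain: it needs the gradients `grads` explicitly, which is exactly the requirement the paper's GP formulation with a built-in projection-matrix hyper-parameter removes.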
Directory of Open Access Journals (Sweden)
Thenmozhi Srinivasan
2015-01-01
Techniques for clustering high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper clusters data using a high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using the ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are checked with synthetic datasets.
The validation and assessment of machine learning: a game of prediction from high-dimensional data
DEFF Research Database (Denmark)
Pers, Tune Hannes; Albrechtsen, A; Holst, C
2009-01-01
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often... ...the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....
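The prediction game can be mimicked with a minimal numpy-only harness in which each "player" submits a fitting rule and is scored by cross-validated error. This is an illustrative stand-in for the paper's setup; the Nugenob data and the players' actual methods (SVM, LASSO, random forests) are not reproduced here:

```python
import numpy as np

def prediction_game(X, y, players, n_folds=5, seed=0):
    """Each player is a fit(X, y) -> predict function; the score is
    the cross-validated mean squared error on held-out folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    scores = {}
    for name, fit in players.items():
        errs = []
        for f in folds:
            train = np.setdiff1d(idx, f)
            predict = fit(X[train], y[train])
            errs.append(np.mean((predict(X[f]) - y[f]) ** 2))
        scores[name] = float(np.mean(errs))
    return scores

def ridge_player(lam):
    def fit(X, y):
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        return lambda Xnew: Xnew @ w
    return fit

def mean_player(X, y):
    m = y.mean()                      # baseline: always predict the mean
    return lambda Xnew: np.full(len(Xnew), m)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))
beta = np.zeros(30); beta[:3] = 2.0   # sparse true signal
y = X @ beta + rng.normal(size=100)
scores = prediction_game(X, y, {"ridge": ridge_player(1.0), "mean": mean_player})
print(scores)
```

Scoring every strategy on the same held-out folds is the essence of the game: the winner is determined by out-of-sample prediction, not by in-sample fit.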
International Nuclear Information System (INIS)
Dasgupta, I.
1998-01-01
We discuss new bounce-like (but non-time-reversal-invariant) solutions to Euclidean equations of motion, which we dub boomerons. In the Euclidean path integral approach to quantum theories, boomerons make an imaginary contribution to the vacuum energy. The fake vacuum instability can be removed by cancelling boomeron contributions against contributions from time reversed boomerons (anti-boomerons). The cancellation rests on a sign choice whose significance is not completely understood in the path integral method. (orig.)
An irregular grid approach for pricing high-dimensional American options
Berridge, S.J.; Schumacher, J.M.
2008-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
CSIR Research Space (South Africa)
Giovannini, D
2013-06-01
Full Text Available : QELS_Fundamental Science, San Jose, California United States, 9-14 June 2013 Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements D. Giovannini1, ⇤, J. Romero1, 2, J. Leach3, A...
Global communication schemes for the numerical solution of high-dimensional PDEs
DEFF Research Database (Denmark)
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
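The Robbins-Monro half of the MH-RM algorithm is a stochastic-approximation recursion. A minimal, self-contained illustration on a toy root-finding problem (not the item factor analysis setting of the abstract; the score function below is invented) is:

```python
import numpy as np

def robbins_monro(noisy_score, x0, n_iter=20000, seed=0):
    """Robbins-Monro recursion x_{k+1} = x_k - a_k * G(x_k) with step
    sizes a_k = 1/(k+1); it converges to a root of the mean field E[G]."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for k in range(n_iter):
        x -= noisy_score(x, rng) / (k + 1)
    return x

# Toy mean field G(x) = x - 2.5 observed with unit Gaussian noise;
# the recursion should settle near the root x* = 2.5.
score = lambda x, rng: (x - 2.5) + rng.standard_normal()
x_hat = robbins_monro(score, x0=0.0)
```

In MH-RM the noisy score comes from a Metropolis-Hastings imputation of the latent factors; the recursion above is the same, only the source of the noise differs.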
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain
2017-01-01
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive
Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization
Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2016-01-01
This paper considers the portfolio problem for high-dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large-dimension matrix theory, and find that the spectral distribution of the sample covariance is the main
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit
Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids
bin Zubair, H.; Oosterlee, C.E.; Wienands, R.
2006-01-01
This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We
An Irregular Grid Approach for Pricing High-Dimensional American Options
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Pricing and hedging high-dimensional American options : an irregular grid approach
Berridge, S.; Schumacher, H.
2002-01-01
We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Tracking in Object Action Space
DEFF Research Database (Denmark)
Krüger, Volker; Herzog, Dennis
2013-01-01
the space of the object affordances, i.e., the space of possible actions that are applied on a given object. This way, 3D body tracking reduces to action tracking in the object (and context) primed parameter space of the object affordances. This reduces the high-dimensional joint-space to a low...
Finite Topological Spaces as a Pedagogical Tool
Helmstutler, Randall D.; Higginbottom, Ryan S.
2012-01-01
We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…
International Nuclear Information System (INIS)
Liu, W; Sawant, A; Ruan, D
2016-01-01
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction
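A rough sketch of the pipeline described above (kernel PCA to a low-dimensional feature manifold, then a pre-image back to the state space) can be written with scikit-learn. Note that sklearn's `inverse_transform` uses a learned ridge-based pre-image map rather than the fixed-point iteration in the abstract, and the toy data below are hypothetical stand-ins for the level-set surfaces.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
# Toy "high-dimensional states": noisy samples near a 1-D curve in R^3
t = rng.uniform(-1.0, 1.0, 300)
states = np.column_stack([t, t**2, t**3]) + 0.01 * rng.standard_normal((300, 3))

# Kernel PCA constructs the nonlinear feature manifold ...
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True, alpha=1e-3)
features = kpca.fit_transform(states)      # ... prediction is done on these ...
recon = kpca.inverse_transform(features)   # ... then mapped back to state space
rmse = float(np.sqrt(np.mean((recon - states) ** 2)))
```

In the abstract's setting, the temporal prediction (e.g. 200 ms lookahead) is performed on `features` before the pre-image step recovers the predicted surface.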
Triebel, Hans
1983-01-01
The book deals with the two scales B^s_{p,q} and F^s_{p,q} of spaces of distributions, where -∞ < s < ∞ and 0 < p, q ≤ ∞, which cover many well-known spaces, such as Hölder spaces, Zygmund classes, Sobolev spaces, Besov spaces, Bessel-potential spaces, Hardy spaces and spaces of BMO-type. It is the main aim of this book to give a unified treatment of the corresponding spaces on the Euclidean n-space Rn in the framework of Fourier analysis, which is based on the technique of maximal functions, Fourier multipliers and interpolation assertions. These topics are treated in Chapter 2, which is the heart
Absence of even-integer ζ-function values in Euclidean physical quantities in QCD
Jamin, Matthias; Miravitllas, Ramon
2018-04-01
At order α_s^4 in perturbative quantum chromodynamics, even-integer ζ-function values are present in Euclidean physical correlation functions like the scalar quark correlation function or the scalar gluonium correlator. We demonstrate that these contributions cancel when the perturbative expansion is expressed in terms of the so-called C-scheme coupling α̂_s, which has recently been introduced in Ref. [1]. It is furthermore conjectured that a ζ_4 term should arise in the Adler function at order α_s^5 in the MS-bar scheme, and that this term is expected to disappear in the C-scheme as well.
DEFF Research Database (Denmark)
Meng, Weizhi; Li, Wenjuan; Wang, Yu
2017-01-01
and healthcare personnel. The underlying network architecture to support such devices is also referred to as medical smartphone networks (MSNs). Similar to other networks, MSNs also suffer from various attacks like insider attacks (e.g., leakage of sensitive patient information by a malicious insider......). In this work, we focus on MSNs and design a trust-based intrusion detection approach using Euclidean distance-based behavioral profiling to detect malicious devices (also called nodes). In the evaluation, we collaborate with healthcare organizations and implement our approach in a real simulated MSN...
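A minimal sketch of Euclidean distance-based behavioral profiling, assuming each node is summarized by a numeric feature vector (e.g., call frequency, data volume, login pattern); the profiles and threshold below are invented for illustration, not the paper's trust model:

```python
import numpy as np

def flag_malicious(profiles, threshold):
    """Flag nodes whose behavioral profile lies far, in Euclidean
    distance, from the network's average profile."""
    centroid = profiles.mean(axis=0)
    dist = np.linalg.norm(profiles - centroid, axis=1)
    return np.flatnonzero(dist > threshold), dist

# Five benign nodes with identical profiles and one outlying node
profiles = np.vstack([np.ones((5, 3)), [[9.0, 9.0, 9.0]]])
flags, dist = flag_malicious(profiles, threshold=5.0)
```

In a trust-based scheme, the distance would typically be converted into a trust score that decays as a node's behavior drifts from its peers.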
Exact Boson-Fermion Duality on a 3D Euclidean Lattice
Chen, Jing-Yuan; Son, Jun Ho; Wang, Chao; Raghu, S.
2018-01-01
The idea of statistical transmutation plays a crucial role in descriptions of the fractional quantum Hall effect. However, a recently conjectured duality between a critical boson and a massless two-component Dirac fermion extends this notion to gapless systems. This duality sheds light on highly nontrivial problems such as the half-filled Landau level, the superconductor-insulator transition, and surface states of strongly coupled topological insulators. Although this boson-fermion duality has undergone many consistency checks, it has remained unproven. We describe the duality in a nonperturbative fashion using an exact UV mapping of partition functions on a 3D Euclidean lattice.
Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data
Directory of Open Access Journals (Sweden)
András Király
2014-01-01
During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered quickly. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
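The core matrix/vector idea (support and closure of an itemset computed by row and column reductions over a bit table) can be sketched as follows; the toy bit table is hypothetical, and a closed itemset together with its supporting rows is exactly an all-ones bicluster:

```python
import numpy as np

def support_rows(D, items):
    """Rows (transactions) that contain every item in `items`."""
    return np.flatnonzero(D[:, items].all(axis=1))

def closure(D, items):
    """Closure of an itemset: all items present in every supporting row.
    A closed itemset plus its supporting rows forms an all-ones bicluster."""
    rows = support_rows(D, items)
    return np.flatnonzero(D[rows].all(axis=0))

# Toy bit table: rows = transactions/samples, columns = items/genes
D = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])

rows_01 = support_rows(D, [0, 1])   # support set of itemset {0, 1}
closed_03 = closure(D, [0, 3])      # closure of itemset {0, 3}
```

Here the closure of {0, 3} also contains item 1, because item 1 appears in every transaction supporting {0, 3}; the submatrix indexed by those rows and the closed itemset is all ones.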
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca
2013-01-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-02-01
New insights into cellular heterogeneity over the last decade have provoked the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
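A generalized sample covariance from incomplete data, with optional hard thresholding for the sparse case, can be sketched as below. This is a simplified illustration in the spirit of the paper (entrywise estimation over jointly observed samples), not its exact estimator; it assumes every pair of coordinates is observed at least once.

```python
import numpy as np

def cov_incomplete(X, tau=0.0):
    """Covariance estimate from data with missing values (NaN).
    Entry (j, k) is computed from the samples where both coordinates j
    and k are observed (missing completely at random); tau > 0 applies
    hard thresholding for the sparse-covariance case."""
    mask = ~np.isnan(X)
    mu = np.nanmean(X, axis=0)
    Xc = np.where(mask, X - mu, 0.0)
    n_jk = mask.T.astype(float) @ mask     # pairwise observation counts
    S = (Xc.T @ Xc) / n_jk
    if tau > 0.0:
        keep = np.abs(S) >= tau
        np.fill_diagonal(keep, True)       # never threshold the variances
        S = S * keep
    return S
```

With no missing entries this reduces to the usual (biased) sample covariance; with missingness, each entry simply uses its own effective sample size.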
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly-used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
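The EPS-plus-LASSO idea can be sketched as follows: simulate a trait driven by a few causal predictors, keep only the phenotypic extremes, and fit a joint sparse model on that subsample. This is a plain LASSO on simulated data; the paper's EPS-LASSO additionally provides decorrelated-score inference for the sampling design, and all data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 600, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0                               # three true causal predictors
y = X @ beta + 0.5 * rng.standard_normal(n)

# Extreme phenotype sampling: keep only the upper and lower quartiles of y
lo, hi = np.quantile(y, [0.25, 0.75])
keep = (y <= lo) | (y >= hi)

# Joint sparse model on the extreme subsample (plain LASSO here; EPS-LASSO
# adds a decorrelated score test on top of this point estimate)
coef = Lasso(alpha=0.1).fit(X[keep], y[keep]).coef_
```

With strong effects, the three causal coefficients dominate the fit while the null coefficients are shrunk to (near) zero, which is the enrichment effect EPS is designed to exploit.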
Controlling chaos in low and high dimensional systems with periodic parametric perturbations
International Nuclear Information System (INIS)
Mirus, K.A.; Sprott, J.C.
1998-06-01
The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
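Whether a perturbation has suppressed chaos is usually judged from the largest Lyapunov exponent, which turns negative when a limit cycle emerges. A minimal estimator is sketched below on the (unperturbed) logistic map, whose exponent at r = 4 is exactly ln 2; this is a hypothetical illustration, not the paper's plasma system.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.123456789, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log|f'(x)| = log|r (1 - 2 x)|."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

lam_chaotic = lyapunov_logistic(4.0)       # analytic value is ln 2 > 0
lam_regular = lyapunov_logistic(3.2)       # stable period-2 orbit: negative
```

Applying the same estimator to a parametrically driven map (e.g. r modulated periodically in time) would show the exponent crossing zero when the perturbation locks the dynamics onto a limit cycle.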
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša
2014-01-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...
GAMLSS for high-dimensional data – a flexible approach based on boosting
Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias
2010-01-01
Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...
Preface [HD3-2015: International meeting on high-dimensional data-driven science
International Nuclear Information System (INIS)
2016-01-01
A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto, 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)
Runcie, Daniel E; Mukherjee, Sayan
2013-07-01
Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.
Zhang, Ying-Ying; Yang, Cai; Zhang, Ping
2017-05-01
In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
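The Log-Euclidean kernel between SPD region covariance descriptors can be computed directly from matrix logarithms: k(A, B) = exp(-||log A − log B||_F² / 2σ²). A minimal numpy sketch (toy matrices, σ chosen arbitrarily), not the paper's full sparse-coding pipeline:

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(A, B, sigma=1.0):
    """Gaussian kernel under the log-Euclidean metric:
    k(A, B) = exp(-||log A - log B||_F^2 / (2 sigma^2))."""
    d2 = np.sum((spd_log(A) - spd_log(B)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma**2)))

# Toy region covariance descriptors of two superpixels
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 0.8]])
k_ab = log_euclidean_kernel(A, B)
```

Because the log map flattens the SPD manifold into a vector space, this kernel lets standard (Euclidean) sparse-coding machinery operate on covariance features.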
International Nuclear Information System (INIS)
Gray, J.
1979-01-01
An historical and chronological account of mathematics. Familiarity with simple equations and elements of trigonometry is needed, but no specialist knowledge is assumed, although difficult problems are discussed. By discussing the difficulties and confusions it is hoped to understand mathematics as a dynamic activity. Beginning with early Greek mathematics, the Eastern legacy and the transition to deductive and geometric thinking, the problem of parallels is then encountered and discussed. The second part of the book takes the story from Wallis, Saccheri and Lambert through to its resolution by Gauss, Lobachevskii, Bolyai, Riemann and Beltrami. The background of the 19th century theory of surfaces is given. The third part gives an account of Einstein's theories based on what has gone before, moving from a Newtonian-Euclidean picture to an Einsteinian-nonEuclidean one. A brief account of gravitation, the nature of space and black holes concludes the book. (UK)
Directory of Open Access Journals (Sweden)
Lambert Marie-Ève
2012-06-01
Background: Porcine reproductive and respiratory syndrome (PRRS) is a viral disease with a major economic impact on the swine industry. Its control is mostly directed towards preventing its spread, which requires a better understanding of the mechanisms of transmission of the virus between herds. The objectives of this study were to describe the genetic diversity and to assess the correlation among genetic, Euclidean and temporal distances and ownership, to better understand pathways of transmission. Results: A cross-sectional study was conducted on sites located in a high-density area of swine production in Quebec. Geographical coordinates (longitude/latitude), date of submission and ownership were obtained for each site. ORF5 sequencing was attempted on PRRSV-positive sites. Proportions of pairwise combinations of strains having ≥98% genetic homology were analysed according to Euclidean distances and ownership. Correlations between genetic, Euclidean and temporal distances and ownership were assessed using Mantel tests on continuous and binary matrices. Sensitivity of the correlations between genetic and Euclidean as well as temporal distances was evaluated for different Euclidean and temporal distance thresholds. An ORF5 sequence was identified for 132 of the 176 (75%) PRRSV-positive sites; 122 were wild-type strains. The mean (min-max) genetic, Euclidean and temporal pairwise distances were 11.6% (0-18.7), 15.0 km (0.04-45.7) and 218 days (0-852), respectively. Significant positive correlations were observed between genetic and ownership, genetic and Euclidean, and genetic and temporal binary distances. The relationship between genetic and ownership suggests either common sources of animals or semen, employees, technical services or vehicles, whereas that between genetic and Euclidean binary distances is compatible with area spread of the virus. The latter correlation was observed only up to 5 km. Conclusions: This study
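The Mantel test used in the study above correlates two distance matrices and assesses significance by jointly permuting the rows and columns of one of them. A compact sketch on synthetic sites (all data hypothetical; real analyses would use genetic distance matrices):

```python
import numpy as np

def mantel(D1, D2, n_perm=499, seed=0):
    """Mantel test: correlation between two distance matrices, with a
    permutation p-value (rows and columns of D1 permuted jointly)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(D1.shape[0], k=1)   # upper-triangle pairs only
    r_obs = np.corrcoef(D1[iu], D2[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(D1.shape[0])
        r = np.corrcoef(D1[p][:, p][iu], D2[iu])[0, 1]
        exceed += (r >= r_obs)
    return r_obs, (exceed + 1) / (n_perm + 1)

# Toy sites: the "genetic" distance is a monotone function of Euclidean
# distance, so the test should report a strong, significant correlation.
rng = np.random.default_rng(4)
sites = rng.random((15, 2))
D_euc = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
D_gen = D_euc ** 1.2
r, p_val = mantel(D_euc, D_gen)
```

Binary-matrix variants (as in the abstract) simply threshold the distances, e.g. genetic homology ≥98% or Euclidean distance ≤5 km, before the same permutation procedure.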
On-chip generation of high-dimensional entangled quantum states and their coherent control.
Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2017-06-28
Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.
High-dimensional chaos from self-sustained collisions of solitons
Energy Technology Data Exchange (ETDEWEB)
Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)
2014-06-16
We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.
A novel algorithm of artificial immune system for high-dimensional function numerical optimization
Institute of Scientific and Technical Information of China (English)
DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen
2005-01-01
Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, it is proved that IMCPA is convergent. Compared with other evolutionary programming algorithms (such as the breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, such as high-dimensional function optimization; it maintains population diversity, avoids premature convergence to some extent, and has a higher convergence speed.
Computing and visualizing time-varying merge trees for high-dimensional data
Energy Technology Data Exchange (ETDEWEB)
Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
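The merge tree underpinning this method can be sketched with a union-find sweep over a scalar field. The toy version below is restricted to a 1-D field and only illustrates the idea; it is not the authors' arbitrary-dimension, time-varying implementation:

```python
def merge_tree_1d(values):
    """Sublevel-set merge tree of a 1-D scalar field via a union-find sweep:
    visiting vertices in increasing value, a vertex with no swept neighbour
    births a component (local minimum); one touching two components records
    a merge event at its function value."""
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    births, merges = [], []
    for i in sorted(range(len(values)), key=values.__getitem__):
        roots = sorted({find(j) for j in (i - 1, i + 1) if j in parent})
        parent[i] = i
        if not roots:
            births.append((values[i], i))
        elif len(roots) == 2:
            merges.append((values[i], roots[0], roots[1]))
        for r in roots:
            parent[r] = i
    return births, merges
```

Tracking features over time then amounts to matching subtrees of such trees between consecutive time steps.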
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite-sample properties of the regularized high-dimensional Cox regression via lasso. The existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
High-dimensional data: p >> n in mathematical statistics and bio-medical applications
Van De Geer, Sara A.; Van Houwelingen, Hans C.
2004-01-01
The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) have brought to life a whole new branch of data analysis under the name of...
Clear evidence of a continuum theory of 4D Euclidean simplicial quantum gravity
International Nuclear Information System (INIS)
Egawa, H.S.; Horata, S.; Yukawa, T.
2002-01-01
Four-dimensional (4D) simplicial quantum gravity coupled to both scalar fields (N_X) and gauge fields (N_A) has been studied using Monte Carlo simulations. The matter dependence of the string susceptibility exponent γ^(4) is estimated. Furthermore, we compare our numerical results with the Background-Metric-Independent (BMI) formulation conjectured to describe the quantum field theory of gravity in 4D. The numerical results suggest that 4D simplicial quantum gravity is related to conformal gravity in 4D. We therefore propose a detailed phase structure with both scalar and gauge fields added, and discuss the possibility and the properties of a continuum theory of 4D Euclidean simplicial quantum gravity.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy-quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally, we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
Hadronic vacuum polarization in QCD and its evaluation in Euclidean spacetime
de Rafael, Eduardo
2017-07-01
We discuss a new technique to evaluate integrals of QCD Green's functions in the Euclidean based on their Mellin-Barnes representation. We present as a first application the evaluation of the lowest-order hadronic vacuum polarization (HVP) contribution to the anomalous magnetic moment of the muon, (1/2)(g_μ - 2)_HVP ≡ a_μ^HVP. It is shown that with a precise determination of the slope and curvature of the HVP function at the origin from lattice QCD (LQCD), one can already obtain a result for a_μ^HVP which may serve as a test of the determinations based on experimental measurements of the e+e- annihilation cross section into hadrons.
Euler numbers of four-dimensional rotating black holes with the Euclidean signature
International Nuclear Information System (INIS)
Ma Zhengze
2003-01-01
For a black hole's spacetime manifold in the Euclidean signature, the metric is positive definite and the manifold is therefore Riemannian. It can be regarded as a gravitational instanton, with which a topological characteristic, the Euler number, is associated. In this paper we derive a formula for the Euler numbers of four-dimensional rotating black holes by integrating the Euler density over the spacetime manifolds of the black holes. Using this formula, we obtain that the Euler numbers of the Kerr and Kerr-Newman black holes are 2. We also obtain that the Euler number of the Kerr-Sen metric in heterotic string theory with one boost angle nonzero is 2, which is in accordance with its topology.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
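The stated bound (the optimal cycle costs at least twice the optimal assignment) can be checked by brute force on small one-dimensional instances. A sketch with hypothetical point sets and the squared-distance cost used in the paper:

```python
from itertools import permutations

def cost(a, b):
    return (a - b) ** 2   # squared Euclidean distance in one dimension

def optimal_assignment(red, blue):
    """Cheapest perfect matching between the two point sets (brute force)."""
    return min(sum(cost(r, b) for r, b in zip(red, p))
               for p in permutations(blue))

def optimal_bipartite_cycle(red, blue):
    """Cheapest closed tour alternating red/blue points; the first red
    point is fixed to quotient out rotations of the cycle."""
    best = float("inf")
    for reds in permutations(red[1:]):
        order_r = (red[0],) + reds
        for order_b in permutations(blue):
            c = sum(cost(order_r[i], order_b[i]) +
                    cost(order_b[i], order_r[(i + 1) % len(red)])
                    for i in range(len(red)))
            best = min(best, c)
    return best

red, blue = [0.1, 0.5, 0.9], [0.2, 0.4, 0.8]
print(optimal_bipartite_cycle(red, blue) >= 2 * optimal_assignment(red, blue))
```

For the convex squared cost, the optimal assignment in 1-D is simply the sorted (ordered) matching, which the brute-force search confirms on small instances.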
Unstable spiral waves and local Euclidean symmetry in a model of cardiac tissue
International Nuclear Information System (INIS)
Marcotte, Christopher D.; Grigoriev, Roman O.
2015-01-01
This paper investigates the properties of unstable single-spiral wave solutions arising in the Karma model of two-dimensional cardiac tissue. In particular, we discuss how such solutions can be computed numerically on domains of arbitrary shape and study how their stability, rotational frequency, and spatial drift depend on the size of the domain as well as the position of the spiral core with respect to the boundaries. We also discuss how the breaking of local Euclidean symmetry due to finite size effects as well as the spatial discretization of the model is reflected in the structure and dynamics of spiral waves. This analysis allows identification of a self-sustaining process responsible for maintaining the state of spiral chaos featuring multiple interacting spirals.
Finite Metric Spaces of Strictly Negative Type
DEFF Research Database (Denmark)
Hjorth, Poul; Lisonek, P.; Markvorsen, Steen
1998-01-01
of Euclidean spaces. We prove that, if the distance matrix is both hypermetric and regular, then it is of strictly negative type. We show that the strictly negative type finite subspaces of spheres are precisely those which do not contain two pairs of antipodal points. In connection with an open problem raised...
Directory of Open Access Journals (Sweden)
Septa Cahyani
2018-04-01
The human ability to recognize a variety of objects, however complex the objects, is a special ability that humans possess. Any normal human will have no difficulty in distinguishing the handwriting of one author from that of another. With the rapid development of digital technology, the human ability to recognize handwritten objects has been replicated in programs, a field known as computer vision. This study aims to create a system for identifying handwritten capital letters that differ in size, thickness, shape and tilt (the distinctive features of handwriting) using the Linear Discriminant Analysis (LDA) and Euclidean Distance methods. LDA is used to extract the characteristic features of the image, making the distance between classes larger while the distance between training data within one class becomes smaller, so that recognition of a digital image of a handwritten capital letter using Euclidean Distance (by searching for the closest distance between training data and testing data) achieves a faster computation time. Testing on the sample data showed that a resolution of 50x50 pixels is the most suitable image resolution for the 1560 handwritten capital letter samples, compared with resolutions of 25x25 and 40x40 pixels. Evaluation with 10-fold cross validation (1404 training samples and 156 testing samples) showed that identification of digital images of handwritten capital letters has an average accuracy of 75.39% with an average computation time of 0.4199 seconds.
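After LDA projection, the classification stage reduces to a nearest-neighbour search under Euclidean distance. A minimal sketch (toy 2-D feature vectors standing in for LDA-projected letter images; not the study's code):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, training):
    """Nearest-neighbour rule: return the label of the training vector at
    the smallest Euclidean distance. `training` is a list of
    (feature_vector, label) pairs, e.g. LDA-projected letter images."""
    return min(training, key=lambda t: euclidean(sample, t[0]))[1]

training = [((0.0, 0.0), "A"), ((5.0, 5.0), "B")]
print(classify((0.5, 0.2), training))  # → A
```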
Ait-Haddou, Rachid
2015-01-01
We show that the best degree reduction of a given polynomial P from degree n to m with respect to the discrete (Formula presented.)-norm is equivalent to the best Euclidean distance of the vector of h-Bézier coefficients of P from the vector
Directory of Open Access Journals (Sweden)
Robert M. Yamaleev
2013-01-01
The hyperbolic cosine and sine theorems for a curvilinear triangle bounded by circular arcs of three intersecting circles are formulated and proved using the general complex calculus. The method is based on a key formula establishing a relationship between the exponential function and the cross-ratio. The proofs are carried out in the Euclidean plane.
International Nuclear Information System (INIS)
Haba, Z.
1981-01-01
In the usual models of Euclidean field theory the Schwinger functions are moments of a positive measure. In this paper the author discusses the basic properties of the measure μ, i.e. properties of the sample paths of the random field. (Auth.)
Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.
Zhao, Yize; Kang, Jian; Long, Qi
2018-01-01
Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate
Directory of Open Access Journals (Sweden)
Seokhoon Kim
2015-01-01
This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference from previous approaches is that existing state-of-the-art schemes use duty cycling and sleep mode to reduce the energy consumption of individual sensor devices, whereas the proposed scheme employs group management of sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Directory of Open Access Journals (Sweden)
Simon Fong
2013-01-01
Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately, in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search, which finds an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate the heuristic search. Simulation experiments are carried out by testing Swarm Search on several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experimental results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
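A wrapper-style stochastic feature search of this kind can be sketched as follows. The code uses a population of agents doing simple bit-flip local search with a 1-NN leave-one-out fitness as a stand-in for the metaheuristics and classifiers discussed (illustrative only, not the paper's algorithm):

```python
import random

def fitness(mask, data):
    """Wrapper fitness: leave-one-out accuracy of a 1-NN classifier
    restricted to the features selected by `mask`."""
    sel = [i for i, keep in enumerate(mask) if keep]
    if not sel:
        return 0.0
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in sel)
    correct = 0
    for k, (x, y) in enumerate(data):
        rest = data[:k] + data[k + 1:]
        nearest = min(rest, key=lambda t: dist(x, t[0]))
        correct += nearest[1] == y
    return correct / len(data)

def swarm_search(data, n_features, agents=8, iters=40, seed=1):
    """Each agent holds a binary feature mask and does stochastic local
    search: flip one random bit, keep the flip unless fitness drops.
    Any classifier could be plugged into `fitness`."""
    rng = random.Random(seed)
    swarm = [[rng.random() < 0.5 for _ in range(n_features)]
             for _ in range(agents)]
    best_mask, best_fit = None, -1.0
    for _ in range(iters):
        for mask in swarm:
            old = fitness(mask, data)
            j = rng.randrange(n_features)
            mask[j] = not mask[j]
            new = fitness(mask, data)
            if new < old:
                mask[j] = not mask[j]   # revert a worsening flip
                new = old
            if new > best_fit:
                best_fit, best_mask = new, mask[:]
    return best_mask, best_fit
```

On a toy data set where only the first feature carries class information, the search discards the noise dimension.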
The validation and assessment of machine learning: a game of prediction from high-dimensional data.
Directory of Open Access Journals (Sweden)
Tune H Pers
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players chose to use support vector machines, LASSO, and random forests, respectively.
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
Energy Technology Data Exchange (ETDEWEB)
Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2017-04-25
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.
A Feature Subset Selection Method Based On High-Dimensional Mutual Information
Directory of Open Access Journals (Sweden)
Chee Keong Kwoh
2011-04-01
Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches of all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
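The Markov-blanket criterion I(X;Y) = H(Y), and the failure of pairwise approximations, can be illustrated on the XOR example: each feature alone carries zero mutual information with the class, yet the pair carries all of it. An illustrative sketch (not the paper's code):

```python
from collections import Counter
from math import log2

def entropy(ys):
    """Empirical entropy H(Y) in bits."""
    n = len(ys)
    return -sum(c / n * log2(c / n) for c in Counter(ys).values())

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits; xs may hold tuples of several features,
    so the estimate is then over their joint distribution."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

x1, x2 = [0, 0, 1, 1], [0, 1, 0, 1]
y = [a ^ b for a, b in zip(x1, x2)]            # XOR class
print(mutual_information(x1, y))               # → 0.0
print(mutual_information(list(zip(x1, x2)), y))  # → 1.0, i.e. H(Y)
```

Since I((X1, X2); Y) = H(Y), the pair {X1, X2} is a Markov blanket of Y even though each pairwise term is zero.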
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models may be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a message 7× longer at the same level of security.
Quantum secret sharing based on modulated high-dimensional time-bin entanglement
International Nuclear Information System (INIS)
Takesue, Hiroki; Inoue, Kyo
2006-01-01
We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam-splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets; cheating attempts can then be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes.
Similarity measurement method of high-dimensional data based on normalized net lattice subspace
Institute of Scientific and Technical Information of China (English)
Li Wenfa; Wang Gongming; Li Ke; Huang Su
2017-01-01
The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that data differences between sparse and noisy dimensionalities occupy a large proportion of the similarity, leading to nearly indistinguishable dissimilarities between any results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
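The interval-mapping idea can be sketched as follows (a simplified, hypothetical reading of the method: equal-width intervals over a unit data range; the paper's exact normalization may differ):

```python
def lattice_similarity(a, b, n_intervals=10, lo=0.0, hi=1.0):
    """Grid each dimension into equal intervals; only dimensions whose two
    components fall in the same or an adjacent interval contribute, so
    sparse and noisy dimensions are ignored.  Result lies in [0, 1]."""
    width = (hi - lo) / n_intervals
    score = 0.0
    for x, y in zip(a, b):
        ix = min(int((x - lo) / width), n_intervals - 1)
        iy = min(int((y - lo) / width), n_intervals - 1)
        if abs(ix - iy) <= 1:                       # same or adjacent cell
            score += 1.0 - abs(x - y) / (hi - lo)   # closeness in this dim
    return score / len(a)
```

A dimension where the two components land in distant cells contributes nothing, which is the mechanism that suppresses the noise terms dominating conventional distances.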
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
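The CV-AUC criterion itself is straightforward to sketch: compute the rank-based AUC on each held-out fold and average. The `fit` argument below is a simplified stand-in for the MCP-penalized logistic fit at one tuning-parameter value (illustrative; not the paper's coordinate-descent solver):

```python
import random

def auc(scores, labels):
    """Rank-based AUC: probability that a random positive outscores a
    random negative (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.5   # fold with a single class: fall back to chance level
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cv_auc(data, fit, k=5, seed=0):
    """k-fold cross-validated AUC of one candidate model; comparing this
    value across tuning-parameter values selects the parameter."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    total = 0.0
    for i in range(k):
        train = [d for j, f in enumerate(folds) if j != i for d in f]
        score = fit(train)          # fit() must return a scoring function
        total += auc([score(x) for x, _ in folds[i]],
                     [y for _, y in folds[i]])
    return total / k
```

Tuning then amounts to evaluating `cv_auc` once per candidate penalty level and keeping the maximizer.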
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data can be a technically challenging task, and if the data is simultaneously high-dimensional, the task becomes even more difficult. In the biomedical field, skewed data often appear. In this study, we deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the accuracy, F-measure, G-mean and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: discarding them using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
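A toy sketch of the selection-then-quantization pipeline described above, under assumed data and sizes; per-dimension variance is used as a simple unsupervised importance score, which only loosely approximates the paper's importance sorting algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "FV/VLAD-like" vectors: a few informative dimensions plus noise.
n, d, k = 500, 1024, 128
X = rng.normal(scale=0.1, size=(n, d))
X[:, :k] += rng.normal(scale=1.0, size=(n, k))  # informative dims: larger variance

# Unsupervised importance sorting: rank dimensions by variance (a simple
# proxy; a supervised variant would rank by a class-separability score).
order = np.argsort(X.var(axis=0))[::-1]
selected = order[:k]

# 1-bit quantization of the selected dimensions: keep only the sign,
# giving k bits per image instead of d floats.
codes = X[:, selected] > 0

# Retrieval by Hamming distance between binary codes.
def hamming(a, b):
    return int(np.count_nonzero(a != b))

query, db = codes[0], codes[1:]
nearest = int(np.argmin([hamming(query, c) for c in db])) + 1
```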
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
International Nuclear Information System (INIS)
Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei
2017-01-01
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so that it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.
High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.
Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton
2017-11-03
Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
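The d = 4 generalization of the Pauli X gate described above is, mathematically, a cyclic shift of the basis states; together with the generalized Z gate it obeys the Weyl commutation relation. A small numerical check using the standard textbook definitions (this abstracts away the experimental OAM implementation entirely):

```python
import numpy as np

d = 4  # dimension of the single-photon state space

# Generalized Pauli X: cyclic shift |k> -> |k+1 mod d>.
X = np.zeros((d, d))
for k in range(d):
    X[(k + 1) % d, k] = 1.0

# Generalized Pauli Z: phase gate diag(1, w, w^2, w^3), w = exp(2*pi*i/d).
w = np.exp(2j * np.pi / d)
Z = np.diag(w ** np.arange(d))

# X^d is the identity, so {X, X^2, ..., X^d} exhausts the integer powers
# demonstrated in the experiment.
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))

# Weyl commutation relation: Z X = w * X Z.
assert np.allclose(Z @ X, w * (X @ Z))
```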
Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn
2017-09-01
Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
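The core idea, a test statistic built from the spectrum of the differential covariance matrix, can be illustrated with a toy permutation test. This sketch omits sLED's sparse projection and asymptotic theory; the dimensions and effect size are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two groups with a small covariance difference in the first two coordinates.
p, n = 10, 200
A = rng.normal(size=(n, p))
B = rng.normal(size=(n, p))
B[:, 0] += 0.9 * B[:, 1]  # extra correlation in group B only

def leading_eig_stat(A, B):
    """Largest absolute eigenvalue of the differential covariance matrix."""
    D = np.cov(A, rowvar=False) - np.cov(B, rowvar=False)
    return float(np.max(np.abs(np.linalg.eigvalsh(D))))

obs = leading_eig_stat(A, B)

# Permutation null: shuffle group labels and recompute the statistic.
pooled = np.vstack([A, B])
perm = []
for _ in range(200):
    idx = rng.permutation(2 * n)
    perm.append(leading_eig_stat(pooled[idx[:n]], pooled[idx[n:]]))
p_value = (1 + sum(s >= obs for s in perm)) / (1 + len(perm))
```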
Zhang, Bo; Chen, Zhen; Albert, Paul S
2012-01-01
High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationships in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.
Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng
2017-01-01
We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies the effective dimensionality of the data from its structure, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations
Garrett, Karen A.; Allison, David B.
2015-01-01
Summary Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106
Challenges and approaches to statistical design and inference in high-dimensional investigations.
Gadbury, Gary L; Garrett, Karen A; Allison, David B
2009-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
Yu, Wenbao; Park, Taesung
2014-01-01
Motivation It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. Results We propose an AUC-based approach u...
q-deformed phase-space and its lattice structure
International Nuclear Information System (INIS)
Wess, J.
1998-01-01
Quantum groups lead to an algebraic structure that can be realized on quantum spaces. These are non-commutative spaces that inherit a well-defined mathematical structure from the quantum group symmetry. In turn, such quantum spaces can be interpreted as non-commutative configuration spaces for physical systems. We study the non-commutative Euclidean space that is based on the quantum group SO_q(3)
High dimensional biological data retrieval optimization with NoSQL technology
2014-01-01
Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patients' gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance compared to MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
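The chaos-game mechanism described above can be illustrated with a toy version: each binarized feature pair selects a corner of the unit square, and the point moves halfway toward that corner, so a subject's feature profile drives the iterated map to a distinctive 2D location. The corner encoding and median binarization are assumptions of this sketch, not Butterfly's actual rules.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy subject-by-feature matrix (stand-in for patient expression data).
X = rng.normal(size=(6, 40))

# Four corners of the unit square; each pair of binarized features picks one,
# in the spirit of chaos-game representations of symbol sequences.
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

def chaos_game_point(row):
    """Drive the iterated map halfway toward the corner chosen by each feature pair."""
    bits = (row > np.median(row)).astype(int)
    pt = np.array([0.5, 0.5])
    for b0, b1 in zip(bits[::2], bits[1::2]):
        pt = (pt + corners[2 * b0 + b1]) / 2.0
    return pt

coords = np.array([chaos_game_point(r) for r in X])  # one 2D point per subject
```

Because each step is a convex combination, every subject lands inside the unit square; nearby feature profiles land near each other, which is what makes the 2D picture usable for spotting subclusters.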
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has been a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large earthquake-affected areas, due to the size of VHR images. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as well as the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural information was used during the classification in addition to the spectral information. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
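Second-order Haralick features of the kind mentioned above are derived from a gray level co-occurrence matrix (GLCM). A minimal sketch on a tiny 4-level patch, for a single distance and direction (horizontal neighbors); the pixel values are made up for illustration.

```python
import numpy as np

# Tiny grayscale patch with 4 gray levels (a stand-in for one window of
# the panchromatic image).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Gray level co-occurrence matrix for horizontal neighbors (distance 1, angle 0):
# count how often gray level i appears immediately left of gray level j.
glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()  # normalize to joint probabilities

# Two second-order Haralick features derived from the GLCM.
ii, jj = np.indices(glcm.shape)
contrast = float(np.sum(glcm * (ii - jj) ** 2))
energy = float(np.sum(glcm ** 2))
```

In practice one repeats this over several window sizes and the four standard directions, which is exactly why texture extraction over a full VHR scene is costly and why pre-selecting the informative features matters.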
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patients' gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance compared to MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating
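The composite row-key idea behind such a layout can be sketched with a plain dictionary standing in for an HBase table; the key layout and identifiers below are illustrative, not the tranSMART schema.

```python
# Minimal sketch of a key-value layout for expression data, using a Python
# dict as a stand-in for an HBase table (row key -> value).
expression = {}

def put(patient_id, probe_id, value):
    # Composite row key: one row per (patient, probe) pair.
    expression[f"{patient_id}|{probe_id}"] = value

def get_patient(patient_id, probes):
    # Fetching a patient's profile is a batch of direct key lookups,
    # avoiding a relational join over a tall (patient, probe, value) table.
    return {p: expression[f"{patient_id}|{p}"] for p in probes}

put("PT001", "GENE_A", 7.31)
put("PT001", "GENE_B", 4.02)
put("PT002", "GENE_A", 6.88)

profile = get_patient("PT001", ["GENE_A", "GENE_B"])
```

In HBase, rows sharing the `patient_id` prefix are stored contiguously, so a patient's whole profile is answered by one range scan rather than many point lookups, which is where the reported speedups over the relational model come from.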
Penalized estimation for competing risks regression with applications to high-dimensional covariates
DEFF Research Database (Denmark)
Ambrogi, Federico; Scheike, Thomas H.
2016-01-01
...Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according...
Energy Technology Data Exchange (ETDEWEB)
Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)
2010-02-14
We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.
International Nuclear Information System (INIS)
Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail; Bougouffa, Smail
2010-01-01
We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.
Time–energy high-dimensional one-side device-independent quantum key distribution
International Nuclear Information System (INIS)
Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei
2017-01-01
Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) needs fewer requirements, which is much easier to meet. In this paper, by applying recently developed novel time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) and provide the security proof against coherent attacks. Besides, we connect the security with quantum steering. By numerical simulation, we obtain the secret key rate for Alice's different detection efficiencies. The results show that our protocol can perform much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice's detection efficiency, and the dispersion coefficient. Finally, we briefly analyze its performance in the optical fiber channel. (paper)
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: First, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.
Inference for feature selection using the Lasso with high-dimensional data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper; Ekstrøm, Claus Thorn
2014-01-01
Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference of the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
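A rough sketch of the flavor of such a randomization test, not the authors' exact procedure: permute one selected feature's column, refit the Lasso, and ask how often the refitted coefficient is as large as the observed one. The penalty level and data are assumed for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 80, 200                      # far more predictors than observations
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only feature 0 is truly active

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # features the Lasso kept

def perm_pvalue(j, n_perm=100):
    """Randomization p-value for feature j: permute its column, refit,
    and compare the refitted |coefficient| with the observed one."""
    obs = abs(lasso.coef_[j])
    hits = 0
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        hits += abs(Lasso(alpha=0.1).fit(Xp, y).coef_[j]) >= obs
    return (1 + hits) / (1 + n_perm)

p0 = perm_pvalue(int(selected[0]))  # p-value for the first selected feature
```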
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang; Tong, Tiejun; Genton, Marc G.
2017-01-01
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
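The statistic's key characteristic, a summation of log-transformed squared t-statistics rather than a direct sum of those components, can be written down directly for the one-sample case. The scaling below follows the standard one-sample likelihood ratio identity -2 log Λ_j = n log(1 + t_j²/(n-1)) and may differ from the paper's exact normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 100
X = rng.normal(size=(n, p))   # one-sample data, simulated under the null mean = 0

# Per-coordinate one-sample t-statistics.
t = X.mean(axis=0) / (X.std(axis=0, ddof=1) / np.sqrt(n))

# Diagonal likelihood ratio statistic: a sum of log-transformed squared
# t-statistics, not a direct sum of the t_j^2 themselves.
stat = float(np.sum(n * np.log1p(t ** 2 / (n - 1))))
```

The log transform damps the influence of any single extreme coordinate relative to a plain sum of squared t-statistics, which is one source of the robustness the abstract describes.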
Characterization of differentially expressed genes using high-dimensional co-expression networks
DEFF Research Database (Denmark)
Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.
2010-01-01
We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
Li, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect-except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
Kernel based methods for accelerated failure time model with ultra-high dimensional data
Directory of Open Access Journals (Sweden)
Jiang Feng
2010-12-01
Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, the usual practice is to select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data, and the proposed method performs well at limited computational cost.
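The dual trick the abstract relies on — working with an n × n kernel (Gram) matrix rather than the m-dimensional feature space — can be sketched with plain kernel ridge regression. This is a generic illustration, not the authors' AFT variable-selection procedure; the RBF kernel and the hyperparameter values are assumptions of this sketch.

```python
import math

def rbf_kernel(X, gamma=0.5):
    """Gram matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2); its size is
    n x n, independent of the number of features m."""
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * d2)
    return K

def solve(A, b):
    """Gaussian elimination with partial pivoting on an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(X, y, lam=1e-3, gamma=0.5):
    """Dual coefficients alpha = (K + lam*I)^-1 y; cost scales with n, not m."""
    K = rbf_kernel(X, gamma)
    n = len(X)
    A = [[K[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    return solve(A, y)

def kernel_ridge_predict(X_train, alpha, x_new, gamma=0.5):
    """Prediction is a kernel-weighted sum over the n training points."""
    return sum(a * math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(xi, x_new)))
               for a, xi in zip(alpha, X_train))
```

The point of the design is that nothing above ever materializes an m × m object, which is why the dual formulation stays tractable when m ≫ n.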
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a data set where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
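The underlying idea of borrowing strength across variables when per-variable variances are unreliable can be illustrated with a simple shrink-toward-the-pool estimator and a t-like statistic built on it. This is a generic James-Stein-flavoured sketch, not the MVR clustering procedure; the fixed shrinkage weight is an assumption (MVR chooses its pooling adaptively).

```python
import math
from statistics import mean

def shrink_variances(sample_vars, weight):
    """Pull each variable-specific variance toward the pooled average;
    weight = 0 keeps the raw estimates, weight = 1 fully pools."""
    pooled = mean(sample_vars)
    return [(1.0 - weight) * v + weight * pooled for v in sample_vars]

def moderated_t(xbar, mu0, shrunk_var, n):
    """t-like statistic using a regularized variance estimate, which
    stabilizes the denominator when per-variable degrees of freedom are few."""
    return (xbar - mu0) / math.sqrt(shrunk_var / n)
```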
Travnik, Jaden B; Pilarski, Patrick M
2017-07-01
Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
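As a rough sketch of the representational idea: classic Kanerva coding activates all prototypes within a fixed radius of the input, whereas the selective variant activates a fixed number of the closest prototypes, so the number of active features stays constant regardless of dimensionality. The code below is our illustrative reading of that scheme, not the authors' implementation; prototype count, activation count, and random placement are assumptions.

```python
import random

def make_prototypes(k, dim, seed=0):
    """k random prototype points in the unit hypercube."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(dim)] for _ in range(k)]

def selective_kanerva_features(x, prototypes, c):
    """Binary feature vector: 1 for the c prototypes closest to x, else 0.
    Exactly c features are active for every input."""
    d2 = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in prototypes]
    nearest = sorted(range(len(prototypes)), key=lambda i: d2[i])[:c]
    feats = [0] * len(prototypes)
    for i in nearest:
        feats[i] = 1
    return feats
```

A linear learner (such as the true-online TD method mentioned above) would then operate on this fixed-sparsity binary vector rather than on the raw sensor readings.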
On spaces of functions of smoothness zero
International Nuclear Information System (INIS)
Besov, Oleg V
2012-01-01
The paper is concerned with the new spaces B̄^0_{p,q} of functions of smoothness zero defined on the n-dimensional Euclidean space R^n or on a subdomain G of R^n. These spaces are compared with the spaces B^0_{p,q}(R^n) and bmo(R^n). The embedding theorems for Sobolev spaces are refined in terms of the space B̄^0_{p,q} with the limiting exponent. Bibliography: 8 titles.
Anthropology in the post-Euclidean State, or from textual to oral anthropology
Directory of Open Access Journals (Sweden)
Antonio Luigi Palmisano
2011-12-01
Full Text Available The current crisis of anthropology is examined in relation to its wide public success. Anthropology has prospered and anthropologists have proliferated, becoming ever more specialized. Yet the theoretical debate has come to a halt over the last decades. The article suggests that both the methodology and the form of expression of the ethnographic report developed and then became crystallized around fixed protocols. A critique of the Subject/Object dichotomy, namely the key discussion about the notion of Otherness, is reexamined here as testimony to an immanent "non-protocolar" character of anthropology. This critique, together with the end of anthropology as tekhne, i.e. as a protocolar activity, will allow anthropology to go on enriching many other sciences, social and otherwise. The article discusses the re-definition of anthropology in the context of Daseinanalysis and, therefore, the changing relation between man and power, that is, between the social actor and the post-Euclidean State in the era of tekhne.
Chaos of discrete dynamical systems in complete metric spaces
International Nuclear Information System (INIS)
Shi Yuming; Chen Guanrong
2004-01-01
This paper is concerned with chaos of discrete dynamical systems in complete metric spaces. Discrete dynamical systems governed by continuous maps in general complete metric spaces are first discussed, and two criteria of chaos are then established. As a special case, two corresponding criteria of chaos for discrete dynamical systems in compact subsets of metric spaces are obtained. These results extend and improve the existing relevant results on chaos in finite-dimensional Euclidean spaces.
Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.
2015-06-01
The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs—that is, by treating the ERP waveform as a time series and then applying Euclidean Distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable—as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
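Both waveform-similarity measures used above are easy to compute once an averaged ERP is treated as a vector (for the Euclidean distance) or a sample (for the Kolmogorov-Smirnov statistic). A minimal pure-Python sketch, illustrative only:

```python
import bisect
import math

def euclidean_distance(a, b):
    """Pointwise distance between two equal-length waveforms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical cumulative distribution functions."""
    sa, sb = sorted(a), sorted(b)
    def ecdf(s, v):  # fraction of s less than or equal to v
        return bisect.bisect_right(s, v) / len(s)
    points = sorted(set(sa) | set(sb))
    return max(abs(ecdf(sa, v) - ecdf(sb, v)) for v in points)
```

Small values of both quantities across session pairs are what "longitudinally stable" means operationally in this design.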
International Nuclear Information System (INIS)
Durato, M V; Nawang, S A; Albano, A M; Rapp, P E
2015-01-01
The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs—that is, by treating the ERP waveform as a time series and then applying Euclidean Distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable—as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test. (paper)
Vogt, Martin; Bajorath, Jürgen
2008-01-01
Bayesian classifiers are increasingly being used to distinguish active from inactive compounds and search large databases for novel active molecules. We introduce an approach to directly combine the contributions of property descriptors and molecular fingerprints in the search for active compounds that is based on a Bayesian framework. Conventionally, property descriptors and fingerprints are used as alternative features for virtual screening methods. Following the approach introduced here, probability distributions of descriptor values and fingerprint bit settings are calculated for active and database molecules and the divergence between the resulting combined distributions is determined as a measure of biological activity. In test calculations on a large number of compound activity classes, this methodology was found to consistently perform better than similarity searching using fingerprints and multiple reference compounds or Bayesian screening calculations using probability distributions calculated only from property descriptors. These findings demonstrate that there is considerable synergy between different types of property descriptors and fingerprints in recognizing diverse structure-activity relationships, at least in the context of Bayesian modeling.
DEFF Research Database (Denmark)
Pham, Ninh Dang; Pagh, Rasmus
2012-01-01
Outlier mining in d-dimensional point sets is a fundamental and well-studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor deteriorate in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of the angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random-projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in a parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality…
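The angle-based outlier factor at the heart of this line of work can be sketched in its naive cubic-time form (the baseline the paper accelerates, not the near-linear random-projection estimator). The distance weighting follows the Kriegel et al. formulation as we understand it; treat the exact normalization as an assumption.

```python
def abof(p, others):
    """Naive angle-based outlier factor of point p: the variance, over all
    pairs of other points, of the distance-weighted inner products of the
    difference vectors. Outliers see the rest of the data under a narrow
    range of angles, so low ABOF flags an outlier."""
    vals = []
    for i in range(len(others)):
        for j in range(i + 1, len(others)):
            ab = [x - y for x, y in zip(others[i], p)]
            ac = [x - y for x, y in zip(others[j], p)]
            nab = sum(v * v for v in ab)
            nac = sum(v * v for v in ac)
            if nab == 0.0 or nac == 0.0:
                continue  # skip coincident points
            vals.append(sum(u * v for u, v in zip(ab, ac)) / (nab * nac))
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)
```

This direct form costs O(n^2) per point, i.e. O(n^3) overall, which is exactly the bottleneck motivating the projection-based approximation.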
Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.
Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel
2011-05-09
Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search rapidly and more precisely finds a global optimal solution. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes; a common choice is the sample correlation matrix. Dimensionality reduction is another popular data analysis task that is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers than the existing alternatives on two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other, less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to gain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang
2017-10-27
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
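One plausible reading of "a summation of the log-transformed squared t-statistics" in the one-sample case is sketched below. The exact constants of the published statistic may differ, so treat the form (in particular the 1 + t²/(n-1) transform and the leading n) as an assumption of this sketch rather than the paper's definition.

```python
import math
from statistics import mean, stdev

def diag_lrt_statistic(X, mu0):
    """One-sample statistic under a diagonal-covariance working model:
    per-variable t-statistics are squared, log-transformed, and summed,
    rather than summed directly as in a diagonal Hotelling's test."""
    cols = list(zip(*X))
    n = len(X)
    total = 0.0
    for col, m0 in zip(cols, mu0):
        t = (mean(col) - m0) / (stdev(col) / math.sqrt(n))
        total += math.log(1.0 + t * t / (n - 1))
    return n * total
```

The log transform damps the influence of any single extreme coordinate, which is one reason such statistics behave well in high dimensions.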
International Nuclear Information System (INIS)
Snyder, Abigail C.; Jiao, Yu
2010-01-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
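The strategy of nesting one-dimensional solvers to build a multidimensional integrator can be sketched in two dimensions; the same pattern extends, at rapidly growing cost, to the four-dimensional case. The trapezoid rule here stands in for whichever GSL routine was actually used, so this is illustrative only.

```python
def trapezoid(f, a, b, n=200):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def integrate2d(f, ax, bx, ay, by, n=200):
    """Nest two 1-D solvers: the outer integrand is itself a 1-D integral.
    Each added dimension multiplies the work by n, which is why a scalable
    (parallel or quasi-Monte Carlo) scheme matters in 4-D."""
    return trapezoid(lambda x: trapezoid(lambda y: f(x, y), ay, by, n), ax, bx, n)
```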
Directory of Open Access Journals (Sweden)
Enkelejda Miho
2018-02-01
Full Text Available The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
Construction of high-dimensional neural network potentials using environment-dependent atom pairs.
Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg
2012-05-21
An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Directory of Open Access Journals (Sweden)
Zekić-Sušac Marijana
2014-09-01
Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by other methods. The pairwise t-test showed a statistical significance between the artificial neural network and the k-nearest neighbour model, while the difference among the other methods was not statistically significant. Conclusions: Tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
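The 10-fold cross-validation protocol used for the comparison can be sketched generically; any fit/score pair plugs in. The function names and the shuffling seed are choices of this sketch, not the paper's software.

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and split into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit, score, X, y, k=10, seed=0):
    """Hold out each fold in turn, train on the rest, average the scores."""
    folds = kfold_indices(len(X), k, seed)
    accs = []
    for f in folds:
        held = set(f)
        Xtr = [X[i] for i in range(len(X)) if i not in held]
        ytr = [y[i] for i in range(len(X)) if i not in held]
        model = fit(Xtr, ytr)
        accs.append(score(model, [X[i] for i in f], [y[i] for i in f]))
    return sum(accs) / k
```

Reporting the per-fold scores (rather than only the mean) is what enables the pairwise t-tests between methods mentioned in the results.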
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer; Michael Pernice; Robert Nourgaliev
2013-05-01
The next generation of methodologies for nuclear reactor Probabilistic Risk Assessment (PRA) explicitly accounts for the time element in modeling the probabilistic system evolution and uses numerical simulation tools to account for possible dependencies between failure events. The Monte-Carlo (MC) and the Dynamic Event Tree (DET) approaches belong to this new class of dynamic PRA methodologies. A challenge of dynamic PRA algorithms is the large amount of data they produce which may be difficult to visualize and analyze in order to extract useful information. We present a software tool that is designed to address these goals. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations. We provide a user’s guide to our software tool by highlighting its analysis and visualization capabilities, along with a use case involving dataset from a nuclear reactor safety simulation.
Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik
2018-03-01
The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol^-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming
2017-05-18
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
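The RV coefficient used here as the cross-dependence measure between cluster factors has a compact matrix form, RV = tr(Sxy Syx) / sqrt(tr(Sxx^2) tr(Syy^2)) on column-centred data matrices. A small pure-Python sketch of the standard definition (illustrative; the paper's estimation pipeline around it is not reproduced):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def rv_coefficient(X, Y):
    """RV coefficient between two data matrices with the same rows
    (e.g. time points) and possibly different numbers of columns (factors)."""
    def center(M):
        cols = list(zip(*M))
        mus = [sum(c) / len(c) for c in cols]
        return [[v - mu for v, mu in zip(row, mus)] for row in M]
    Xc, Yc = center(X), center(Y)
    Sxx = matmul(transpose(Xc), Xc)
    Syy = matmul(transpose(Yc), Yc)
    Sxy = matmul(transpose(Xc), Yc)
    num = trace(matmul(Sxy, transpose(Sxy)))
    return num / math.sqrt(trace(matmul(Sxx, Sxx)) * trace(matmul(Syy, Syy)))
```

By construction the coefficient lies in [0, 1] and is invariant to rescaling either block, which is what makes it a convenient between-cluster dependence summary.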
Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
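The two FANS steps, replacing each feature by an estimated marginal log density ratio and then fitting a penalized logistic regression on the transformed features, can be sketched as follows. This is a schematic illustration on synthetic data; the Gaussian kernel density estimator, penalty level and data are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

def fans_transform(X_train, y_train, X):
    """Replace each feature by its estimated marginal log density ratio
    log f1(x_j) / f0(x_j) -- a sketch of the FANS augmentation step."""
    Z = np.empty_like(X, dtype=float)
    eps = 1e-12  # guard against log(0) in low-density regions
    for j in range(X.shape[1]):
        f1 = gaussian_kde(X_train[y_train == 1, j])
        f0 = gaussian_kde(X_train[y_train == 0, j])
        Z[:, j] = np.log(f1(X[:, j]) + eps) - np.log(f0(X[:, j]) + eps)
    return Z

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 5))
X[:, 0] += 1.5 * y                          # only feature 0 is informative
Z = fans_transform(X, y, X)                 # augmented features
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z, y)
print(clf.score(Z, y))
```

The L1 penalty handles the "global simplicity" (selection) part, while the per-feature density ratios supply the "local complexity" that a plain linear model lacks.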
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
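A maximum-type statistic calibrated by a parametric (Gaussian) bootstrap, in the spirit of the one-sample test described above, can be sketched as follows. The details (plugging in the sample correlation, the jitter, the sample sizes) are illustrative assumptions, not the authors' exact procedure or the HDtest code:

```python
import numpy as np

def max_test_pvalue(X, n_boot=2000, seed=0):
    """One-sample max-type test of H0: mean = 0, calibrated by a
    parametric bootstrap from N(0, R_hat) -- a simplified sketch."""
    n, p = X.shape
    sd = X.std(axis=0, ddof=1)
    t_obs = np.max(np.abs(np.sqrt(n) * X.mean(axis=0) / sd))
    R = np.corrcoef(X, rowvar=False)                 # estimated correlation
    rng = np.random.default_rng(seed)
    # draw max statistics under H0 from N(0, R); jitter keeps Cholesky stable
    L = np.linalg.cholesky(R + 1e-10 * np.eye(p))
    draws = np.max(np.abs(rng.standard_normal((n_boot, p)) @ L.T), axis=1)
    return np.mean(draws >= t_obs)

rng = np.random.default_rng(2)
X0 = rng.standard_normal((80, 20))                   # null data
X1 = X0 + np.r_[0.8, np.zeros(19)]                   # sparse mean shift
print(max_test_pvalue(X0), max_test_pvalue(X1))
```

Because the critical value comes from the estimated correlation matrix rather than from an asymptotic formula, no structural condition on the covariance is imposed, which is the point the abstract emphasizes.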
Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets
Directory of Open Access Journals (Sweden)
Shen Lu
2013-04-01
Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples; otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm, called Multi-SOM. Multi-SOM builds a number of small self-organizing maps, instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size becomes less of a consideration, and errors tend to average out. In our experiments, as applied to microarray datasets, which are high-density data composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and its recall is 11.07% greater. Thus, the Multi-SOMs algorithm is practical.
The philosophy of space and time
Reichenbach, Hans
1958-01-01
With unusual depth and clarity, the author covers the problem of the foundations of geometry, the theory of time, the theory and consequences of Einstein's relativity including: relations between theory and observations, coordinate definitions, relations between topological and metrical properties of space, the psychological problem of the possibility of a visual intuition of non-Euclidean structures, and many other important topics in modern science and philosophy. While some of the book utilizes mathematics of a somewhat advanced nature, the exposition is so careful and complete that most people familiar with the philosophy of science or some intermediate mathematics will understand the majority of the ideas and problems discussed. Partial contents: I. The Problem of Physical Geometry. Universal and Differential Forces. Visualization of Geometries. Spaces with non-Euclidean Topological Properties. Geometry as a Theory of Relations. II. The Difference between Space and Time. Simultaneity. Time Order. Unreal ...
A 4D spacetime embedded in a 5D pseudo-Euclidean space describing interior of compact stars
Energy Technology Data Exchange (ETDEWEB)
Singh, K.N. [National Defence Academy, Department of Physics, Khadakwasla (India); Murad, Mohammad Hassan [BRAC University, Department of Mathematics and Natural Sciences, Dhaka (Bangladesh); Pant, Neeraj [National Defence Academy, Department of Mathematics, Khadakwasla (India)
2017-02-15
The present paper provides a new model of compact stars satisfying the Karmarkar condition. The model is obtained by assuming a new type of metric potential for g{sub rr} from the condition of embedding class I. The model parameters are obtained accordingly by employing the metric potentials to Einstein's field equations. Our model is free from geometric singularity and satisfies all the physical conditions. The masses and radii of the compact stars Cen X-3, EXO 1785-248 and SAX 1808.4-3658 obtained from the model are consistent with the observational data of T. Gangopadhyay et al. Detailed analyses of these neutron stars (Cen X-3, EXO 1785-248 and SAX 1808.4-3658) are also given with the help of graphical representations. (orig.)
Directory of Open Access Journals (Sweden)
Laurent Berge
2012-01-01
Full Text Available This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, which allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the General Public License, as part of the R software project.
International Nuclear Information System (INIS)
Langrene, Nicolas
2014-01-01
This thesis deals with the numerical solution of general stochastic control problems, with notable applications for electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we manage to make the algorithm parsimonious in memory (and hence suitable for high dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations, and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super replication of options under uncertain volatilities (and correlations). (author)
Evaluation of a new high-dimensional miRNA profiling platform
Directory of Open Access Journals (Sweden)
Lamblin Anne-Francoise
2009-08-01
Full Text Available Abstract Background MicroRNAs (miRNAs) are a class of approximately 22 nucleotide long, widely expressed RNA molecules that play important regulatory roles in eukaryotes. To investigate miRNA function, it is essential that methods to quantify their expression levels be available. Methods We evaluated a new miRNA profiling platform that utilizes Illumina's existing robust DASL chemistry as the basis for the assay. Using total RNA from five colon cancer patients and four cell lines, we evaluated the reproducibility of miRNA expression levels across replicates and with varying amounts of input RNA. The beta test version comprised 735 miRNA targets of Illumina's miRNA profiling application. Results Reproducibility between sample replicates within a plate was good (Spearman's correlation 0.91 to 0.98), as was the plate-to-plate reproducibility of replicates run on different days (Spearman's correlation 0.84 to 0.98). To determine whether quality data could be obtained from a broad range of input RNA, data obtained from amounts ranging from 25 ng to 800 ng were compared to those obtained at 200 ng. No effect across the range of RNA input was observed. Conclusion These results indicate that very small amounts of starting material are sufficient to allow sensitive miRNA profiling using the Illumina miRNA high-dimensional platform. Nonlinear biases were observed between replicates, indicating the need for abundance-dependent normalization. Overall, the performance characteristics of the Illumina miRNA profiling system were excellent.
Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps that are troublesome to interpret. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.
Gomez, Luis J; Yücel, Abdulkadir C; Hernandez-Garcia, Luis; Taylor, Stephan F; Michielssen, Eric
2015-01-01
A computational framework for uncertainty quantification in transcranial magnetic stimulation (TMS) is presented. The framework leverages high-dimensional model representations (HDMRs), which approximate observables (i.e., quantities of interest such as electric (E) fields induced inside targeted cortical regions) via series of iteratively constructed component functions involving only the most significant random variables (i.e., parameters that characterize the uncertainty in a TMS setup such as the position and orientation of TMS coils, as well as the size, shape, and conductivity of the head tissue). The component functions of HDMR expansions are approximated via a multielement probabilistic collocation (ME-PC) method. While approximating each component function, a quasi-static finite-difference simulator is used to compute observables at integration/collocation points dictated by the ME-PC method. The proposed framework requires far fewer simulations than traditional Monte Carlo methods for providing highly accurate statistical information (e.g., the mean and standard deviation) about the observables. The efficiency and accuracy of the proposed framework are demonstrated via its application to the statistical characterization of E-fields generated by TMS inside cortical regions of an MRI-derived realistic head model. Numerical results show that while uncertainties in tissue conductivities have negligible effects on TMS operation, variations in coil position/orientation and brain size significantly affect the induced E-fields. Our numerical results have several implications for the use of TMS during depression therapy: 1) uncertainty in the coil position and orientation may reduce the response rates of patients; 2) practitioners should favor targets on the crest of a gyrus to obtain maximal stimulation; and 3) an increasing scalp-to-cortex distance reduces the magnitude of E-fields on the surface and inside the cortex.
Directory of Open Access Journals (Sweden)
Datta Susmita
2010-08-01
Full Text Available Abstract Background Generally speaking, different classifiers tend to work well for certain types of data and conversely, it is usually not known a priori which algorithm will be optimal in any given classification application. In addition, for most classification problems, selecting the best performing classification algorithm amongst a number of competing algorithms is a difficult task for various reasons. For example, the order of performance may depend on the performance measure employed for such a comparison. In this work, we present a novel adaptive ensemble classifier constructed by combining bagging and rank aggregation that is capable of adaptively changing its performance depending on the type of data that is being classified. The attractive feature of the proposed classifier is its multi-objective nature where the classification results can be simultaneously optimized with respect to several performance measures, for example, accuracy, sensitivity and specificity. We also show that our somewhat complex strategy has better predictive performance as judged on test samples than a more naive approach that attempts to directly identify the optimal classifier based on the training data performances of the individual classifiers. Results We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs at the level of the best individual classifier comprising the ensemble or better. Conclusions For complex high-dimensional datasets resulting from present day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a fixed standard algorithm set a priori.
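The rank-aggregation idea, scoring each candidate classifier on several performance measures at once and combining the per-measure ranks, can be illustrated with a small Borda-style sketch. The performance numbers below are invented, and the paper's actual aggregation scheme may differ:

```python
import numpy as np

# Performance of four candidate classifiers (rows A-D) on three measures
# (accuracy, sensitivity, specificity). Numbers are made up for the sketch.
perf = np.array([
    [0.91, 0.88, 0.93],   # A
    [0.89, 0.95, 0.80],   # B
    [0.93, 0.85, 0.90],   # C
    [0.86, 0.90, 0.88],   # D
])

# Per-measure ranks (1 = best); double argsort is a standard ranking trick
# and assumes no ties, as here.
ranks = (-perf).argsort(axis=0).argsort(axis=0) + 1
agg = ranks.mean(axis=1)      # Borda-style aggregated rank per classifier
best = int(np.argmin(agg))
print("aggregated ranks:", agg, "-> pick", "ABCD"[best])
```

Ranking first and averaging afterwards lets measures on different scales (accuracy vs. sensitivity) contribute equally, which is the multi-objective point made in the abstract.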
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
Metrics of a 'mole hole' against the Lobachevsky space background
International Nuclear Information System (INIS)
Tentyukov, M.N.
1994-01-01
'Classical' mole holes are Euclidean metrics consisting of two large space regions connected by a throat. They are instanton solutions of the Einstein equations. It is shown that the existence of mole holes in general relativity requires an energy-momentum tensor that violates the energy conditions. 9 refs., 7 figs
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
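One way to cast AUC maximization as penalized regression, loosely in the spirit of AucPR, is to fit an L1-penalized logistic model to case-control pairwise differences, since the AUC is the probability that a case score exceeds a control score. The sketch below on synthetic data illustrates that idea; it is not the authors' exact estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_penalized_weights(X_case, X_ctrl, C=0.5):
    """Linear combination maximizing a penalized smooth AUC surrogate:
    L1-logistic regression on case-control pairwise differences
    (a sketch in the spirit of AucPR, not the paper's exact method)."""
    D = (X_case[:, None, :] - X_ctrl[None, :, :]).reshape(-1, X_case.shape[1])
    Dsym = np.vstack([D, -D])                    # symmetrize the pair set
    ysym = np.r_[np.ones(len(D)), np.zeros(len(D))]
    clf = LogisticRegression(penalty="l1", solver="liblinear",
                             fit_intercept=False, C=C).fit(Dsym, ysym)
    return clf.coef_.ravel()

rng = np.random.default_rng(3)
X_ctrl = rng.standard_normal((60, 10))
X_case = rng.standard_normal((60, 10))
X_case[:, :2] += 1.0                             # two informative "genes"
w = auc_penalized_weights(X_case, X_ctrl)
scores = np.r_[X_case @ w, X_ctrl @ w]
labels = np.r_[np.ones(60), np.zeros(60)]
print(roc_auc_score(labels, scores))
```

The L1 penalty performs the gene selection, and the pairwise-difference construction is what turns the rank-based AUC objective into an ordinary regression problem.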
From free fields to AdS space: Thermal case
International Nuclear Information System (INIS)
Furuuchi, Kazuyuki
2005-01-01
We analyze the reorganization of free field theory correlators to closed string amplitudes investigated in previous papers in the case of Euclidean thermal field theory and study how the dual bulk geometry is encoded in them. The expectation value of the Polyakov loop, which is an order parameter for the confinement-deconfinement transition, is directly reflected in the dual bulk geometry. The dual geometry of the confined phase is found to be AdS space periodically identified in the Euclidean time direction. The gluing of Schwinger parameters, which is a key step for the reorganization of field theory correlators, works in the same way as in the nonthermal case. In the deconfined phase the gluing is made possible only by taking the dual geometry correctly. The dual geometry for the deconfined phase does not have a noncontractible circle in the Euclidean time direction.
Euclidean Dynamical Triangulation revisited: is the phase transition really 1st order?
International Nuclear Information System (INIS)
Rindlisbacher, Tobias; Forcrand, Philippe de
2015-01-01
The transition between the two phases of 4D Euclidean Dynamical Triangulation (http://dx.doi.org/10.1016/0370-2693(92)90709-D) was long believed to be of second order until in 1996 first order behavior was found for sufficiently large systems (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4). However, one may wonder if this finding was affected by the numerical methods used: to control volume fluctuations, in both studies (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4) an artificial harmonic potential was added to the action and in (http://dx.doi.org/10.1016/S0370-2693(96)01277-4) measurements were taken after a fixed number of accepted instead of attempted moves which introduces an additional error. Finally the simulations suffer from strong critical slowing down which may have been underestimated. In the present work, we address the above weaknesses: we allow the volume to fluctuate freely within a fixed interval; we take measurements after a fixed number of attempted moves; and we overcome critical slowing down by using an optimized parallel tempering algorithm (http://dx.doi.org/10.1088/1742-5468/2010/01/P01020). With these improved methods, on systems of size up to N_4=64k 4-simplices, we confirm that the phase transition is 1st order. In addition, we discuss a local criterion to decide whether parts of a triangulation are in the elongated or crumpled state and describe a new correspondence between EDT and the balls in boxes model. The latter gives rise to a modified partition function with an additional, third coupling. Finally, we propose and motivate a class of modified path-integral measures that might remove the metastability of the Markov chain and turn the phase transition into 2nd order.
Zhang, Dongwen; Zhu, Qingsong; Xiong, Jing; Wang, Lei
2014-04-27
In a deforming anatomic environment, the motion of an instrument suffers from complex geometrical and dynamic constraints; robot-assisted minimally invasive surgery therefore requires more sophisticated skills for surgeons. This paper proposes a novel dynamic virtual fixture (DVF) to enhance the surgical operation accuracy of admittance-type medical robotics in the deforming environment. A framework for DVF on the Euclidean Group SE(3) is presented, which unites rotation and translation in a compact form. First, we constructed the holonomic/non-holonomic constraints, and then searched for the corresponding reference to make a distinction between preferred and non-preferred directions. Second, different control strategies are employed to deal with the tasks along the distinguished directions. The desired spatial compliance matrix is synthesized from an allowable motion screw set to filter out the task-unrelated components from manual input; the operator has complete control over the preferred directions, while the relative motion between the surgical instrument and the anatomy structures is actively tracked and cancelled, and the deviation relative to the reference is compensated jointly by the operator and DVF controllers. The operator, haptic device, admittance-type proxy and virtual deforming environment are involved in a hardware-in-the-loop experiment; human-robot cooperation with the assistance of the DVF controller is carried out on a deforming sphere to simulate beating-heart surgery, performance of the proposed DVF on the admittance-type proxy is evaluated, and both human factors and control parameters are analyzed. The DVF can improve the dynamic properties of human-robot cooperation in a low-frequency (0 ~ 40 rad/sec) deforming environment, and maintain synergy of orientation and translation during the operation. Statistical analysis reveals that the operator has intuitive control over the preferred directions, human and the DVF controller jointly control the
Gravity mediated Dark Matter models in the de Sitter space
Vancea, Ion V.
2018-01-01
In this paper, we generalize the simplified Dark Matter models with graviton mediator to the curved space-time, in particular to the de Sitter space. We obtain the generating functional of the Green's functions in the Euclidean de Sitter space for the covariant free gravitons. We determine the generating functional of the interacting theory between Dark Matter particles and the covariant gravitons. Also, we calculate explicitly the 2-point and 3-point interacting Green's functions for the sym...
Filaments of Meaning in Word Space
Karlgren, Jussi; Holst, Anders; Sahlgren, Magnus
2008-01-01
Word space models, in the sense of vector space models built on distributional data taken from texts, are used to model semantic relations between words. We argue that the high dimensionality of typical vector space models leads to unintuitive effects on modeling likeness of meaning and that the local structure of word spaces is where interesting semantic relations reside. We show that the local structure of word spaces has substantially different dimensionality and character than the global s...
The Perspective Structure of Visual Space
2015-01-01
Luneburg’s model has been the reference for experimental studies of visual space for almost seventy years. His claim for a curved visual space has been a source of inspiration for visual scientists as well as philosophers. The conclusion of many experimental studies has been that Luneburg’s model does not describe visual space in various tasks and conditions. Remarkably, no alternative model has been suggested. The current study explores perspective transformations of Euclidean space as a model for visual space. Computations show that the geometry of perspective spaces is considerably different from that of Euclidean space. Collinearity but not parallelism is preserved in perspective space, and angles are not invariant under translation and rotation. Similar relationships have been shown to be properties of visual space. Alley experiments performed early in the twentieth century were instrumental in hypothesizing curved visual spaces. Alleys were computed in perspective space and compared with the reconstructed alleys of Blumenfeld. Parallel alleys were accurately described by perspective geometry. Accurate distance alleys were derived from parallel alleys by adjusting the interstimulus distances according to the size-distance invariance hypothesis. Agreement between computed and experimental alleys, and accommodation of experimental results that rejected Luneburg’s model, show that perspective space is an appropriate model for how we perceive orientations and angles. The model is also appropriate for perceived distance ratios between stimuli but fails to predict perceived distances. PMID:27648222
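The two geometric claims above, that collinearity is preserved under perspective transformation while parallelism is not, are easy to verify numerically. The map used below, (x, y) → (x, y)·d/(d + y), is an illustrative projective transform with viewing distance d, not the paper's exact model of visual space:

```python
import numpy as np

def perspective(points, d=2.0):
    """Map 2-D points through a simple perspective transform with viewing
    distance d: (x, y) -> (x, y) * d / (d + y). Linear in homogeneous
    coordinates, hence projective (collinearity-preserving)."""
    points = np.asarray(points, dtype=float)
    s = d / (d + points[:, 1])
    return points * s[:, None]

def cross2(u, v):
    """z-component of the 2-D cross product (zero iff u, v are parallel)."""
    return u[0] * v[1] - u[1] * v[0]

# Collinearity is preserved: three points on the line y = x stay collinear.
a, b, c = perspective([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(abs(cross2(b - a, c - a)) < 1e-9)   # True

# Parallelism is not: two vertical parallel segments map to non-parallel ones.
q = perspective([[0.0, 0.0], [0.0, 2.0], [1.0, 0.0], [1.0, 2.0]])
u, v = q[1] - q[0], q[3] - q[2]
print(abs(cross2(u, v)) < 1e-9)           # False
```

In homogeneous coordinates the map is the matrix [[d, 0, 0], [0, d, 0], [0, 1, d]], which is why straight lines survive while parallel lines converge, exactly the qualitative behavior the abstract ascribes to perspective space.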
Gómez, Daviel; Hernández, L Ázaro; Yabor, Lourdes; Beemster, Gerrit T S; Tebbe, Christoph C; Papenbrock, Jutta; Lorenzo, José Carlos
2018-03-15
Plant scientists usually record several indicators in their abiotic factor experiments. The common statistical management involves univariate analyses. Such analyses generally create a split picture of the effects of experimental treatments since each indicator is addressed independently. The Euclidean distance combined with the information of the control treatment could have potential as an integrating indicator. The Euclidean distance has demonstrated its usefulness in many scientific fields but, as far as we know, it has not yet been employed for plant experimental analyses. To exemplify the use of the Euclidean distance in this field, we performed an experiment focused on the effects of mannitol on sugarcane micropropagation in temporary immersion bioreactors. Five mannitol concentrations were compared: 0, 50, 100, 150 and 200 mM. As dependent variables we recorded shoot multiplication rate, fresh weight, and levels of aldehydes, chlorophylls, carotenoids and phenolics. The statistical protocol which we then carried out integrated all dependent variables to easily identify the mannitol concentration that produced the most remarkable integral effect. Results provided by the Euclidean distance demonstrate a gradually increasing distance from the control in function of increasing mannitol concentrations. 200 mM mannitol caused the most significant alteration of sugarcane biochemistry and physiology under the experimental conditions described here. This treatment showed the longest statistically significant Euclidean distance to the control treatment (2.38). In contrast, 50 and 100 mM mannitol showed the lowest Euclidean distances (0.61 and 0.84, respectively) and thus poor integrated effects of mannitol. The analysis shown here indicates that the use of the Euclidean distance can contribute to establishing a more integrated evaluation of the contrasting mannitol treatments.
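The analysis described above, standardizing each indicator and then measuring each treatment's Euclidean distance to the control, can be sketched in a few lines. The treatment means below are invented for illustration; they are not the paper's measurements:

```python
import numpy as np

# Illustrative means of 4 indicators (e.g., multiplication rate, fresh weight,
# chlorophylls, phenolics) at each mannitol level (mM). Numbers are made up.
levels = [0, 50, 100, 150, 200]
means = np.array([
    [3.1, 0.52, 41.0, 12.0],   # 0 mM (control)
    [2.9, 0.50, 39.0, 13.0],
    [2.7, 0.47, 36.0, 15.0],
    [2.2, 0.40, 30.0, 19.0],
    [1.6, 0.31, 24.0, 25.0],
])

# Standardize each indicator so no single measurement scale dominates,
# then take each treatment's Euclidean distance to the control row.
z = (means - means.mean(axis=0)) / means.std(axis=0, ddof=1)
dist = np.linalg.norm(z - z[0], axis=1)
for lvl, d in zip(levels, dist):
    print(f"{lvl:>3} mM: distance to control = {d:.2f}")
```

The single distance per treatment is what gives the "integrated" reading of all indicators at once; with data shaped like the paper's, the distances grow monotonically with mannitol concentration.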
Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2012-01-01
The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...
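The extraction of latent variables from population activity can be sketched with PCA, one common dimensionality-reduction choice (the abstract does not commit to a particular method, so this is only an assumed instance). The data here are synthetic, with a known 3-dimensional latent structure driving 50 simulated neurons.

```python
import numpy as np

# Synthetic population activity: 3 latent trajectories drive 50 "neurons"
# through a random loading matrix, plus a small amount of noise.
rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 50, 200, 3
latents = rng.standard_normal((n_timepoints, n_latents))
loading = rng.standard_normal((n_latents, n_neurons))
activity = latents @ loading + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD of the mean-centered activity matrix.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)          # fraction of variance per component

# Project into the reduced-dimensional space for visualization.
reduced = centered @ Vt[:n_latents].T    # (timepoints x 3) latent trajectories
```

By construction, the top three components capture nearly all of the variance, so `reduced` recovers a low-dimensional space in which the population trajectory can be plotted over time.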
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important first step of literature mining. In this article, we describe in detail the gene mention tagger with which we participated in the BioCreative 2 challenge, and we analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it achieved the highest F-scores among CRF-based methods and the second highest overall. Moreover, we obtained our results mostly with open-source packages, making them easy to reproduce. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the potentially relevant information. We trained bi-directional CRF models with the same feature set, one parsing forward and the other backward, and integrated the two models based on their output scores and dictionary filtering. One of the most prominent factors contributing to the good performance of our tagger is the integration of the additional backward parsing model. From the definition of a CRF, however, it would appear that the model is symmetric and that bi-directional parsing should produce identical results. We show that, owing to different feature settings, a CRF model can be asymmetric, and that the feature setting used by our tagger in BioCreative 2 not only produces different results but also gives the backward parsing model a slight but consistent advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on their output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene
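The score-based integration of forward and backward models can be sketched as below. The mention spans and confidence scores are hypothetical stand-ins for what real CRF decoders would emit; the merge-and-threshold rule is one simple instance of combining two taggers' outputs, not the paper's exact procedure (which also applies dictionary filtering).

```python
# Hypothetical per-mention confidence scores from two taggers, keyed by
# (mention text, start offset, end offset). Real CRF taggers would produce
# these scores from their decoding lattices.
forward  = {("p53", 10, 13): 0.92, ("BRCA1", 40, 45): 0.55}
backward = {("p53", 10, 13): 0.88, ("Rb", 70, 72): 0.61}

def integrate(fwd, bwd, threshold=0.75):
    """Union the two candidate sets, average the available scores, and
    keep mentions whose combined score clears the threshold."""
    merged = {}
    for mention in set(fwd) | set(bwd):
        scores = [model[mention] for model in (fwd, bwd) if mention in model]
        merged[mention] = sum(scores) / len(scores)
    return {m: s for m, s in merged.items() if s >= threshold}

kept = integrate(forward, backward)
```

Mentions found confidently by both directions survive, while weak single-direction candidates are filtered out; this is the intuition behind the gain from adding the backward model.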
Greedy algorithms for high-dimensional non-symmetric linear problems
Directory of Open Access Journals (Sweden)
Cancès E.
2013-12-01
In this article, we present a family of numerical approaches to solve high-dimensional linear non-symmetric problems. The principle of these methods is to approximate a function that depends on a large number of variates by a sum of tensor product functions, each term of which is computed iteratively via a greedy algorithm. A good theoretical framework exists for these methods in the case of (linear and nonlinear) symmetric elliptic problems. However, the convergence results are no longer valid as soon as the problems under consideration are not symmetric. We present here a review of the main algorithms proposed in the literature to circumvent this difficulty, together with some new approaches. The theoretical convergence results and the practical implementation of these algorithms are discussed, and their behavior is illustrated through numerical examples.
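The greedy "sum of tensor products" idea can be sketched in its simplest two-variable setting: approximating a matrix by a sum of rank-one terms, each computed greedily on the current residual. This is only an illustration of the general principle under the simplest (symmetric, two-variate) assumptions; the article's algorithms address the harder non-symmetric operator case.

```python
import numpy as np

# Target: a rank-4 matrix playing the role of a two-variate function.
rng = np.random.default_rng(1)
F = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))

def greedy_rank_one_sum(F, n_terms, n_iter=100):
    """Approximate F by a sum of rank-one terms u v^T, where each term is
    the (approximately) best rank-one fit to the residual, found by an
    alternating fixed-point iteration."""
    residual = F.copy()
    approx = np.zeros_like(F)
    for _ in range(n_terms):
        v = rng.standard_normal(F.shape[1])
        for _ in range(n_iter):
            u = residual @ v / (v @ v)       # best u for fixed v
            v = residual.T @ u / (u @ u)     # best v for fixed u
        term = np.outer(u, v)
        approx += term
        residual -= term                     # greedy deflation
    return approx

approx = greedy_rank_one_sum(F, n_terms=4)
```

For this symmetric least-squares setting the greedy iteration recovers the dominant singular directions one at a time, so four terms essentially reproduce the rank-4 target; the article's point is precisely that such guarantees break down for non-symmetric problems.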
Ait-Haddou, Rachid
2015-06-04
We show that the best degree reduction of a given polynomial P from degree n to m with respect to the discrete (Formula presented.)-norm is equivalent to finding the best Euclidean distance of the vector of h-Bézier coefficients of P from the vector of degree-raised h-Bézier coefficients of polynomials of degree m. Moreover, we demonstrate the adequacy of h-Bézier curves for the problem of weighted discrete least squares approximation. Applications to discrete orthogonal polynomials are also presented.
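The coefficient-space view of degree reduction can be sketched for classical Bézier curves (the h = 0 case of h-Bézier, used here as an assumed simplification): reducing from degree n to m amounts to the Euclidean-closest point to P's coefficient vector in the image of the degree-elevation matrix, i.e. a least-squares projection.

```python
import numpy as np
from math import comb

def elevation_matrix(m, n):
    """E maps degree-m Bézier coefficients to the equivalent degree-n
    coefficients: E[i, j] = C(m,j) C(n-m,i-j) / C(n,i)."""
    E = np.zeros((n + 1, m + 1))
    for i in range(n + 1):
        for j in range(max(0, i - (n - m)), min(m, i) + 1):
            E[i, j] = comb(m, j) * comb(n - m, i - j) / comb(n, i)
    return E

def reduce_degree(b, m):
    """Best degree-m reduction of the degree-n coefficient vector b:
    the Euclidean-closest point to b among degree-raised vectors E c."""
    n = len(b) - 1
    E = elevation_matrix(m, n)
    c, *_ = np.linalg.lstsq(E, b, rcond=None)
    return c, E @ c

b = np.array([0.0, 2.0, 1.0, 3.0])      # degree-3 Bézier coefficients
c, b_proj = reduce_degree(b, 2)         # best degree-2 reduction
```

As a sanity check on the equivalence, a polynomial that is exactly of degree m should be recovered unchanged after elevation followed by reduction, since its elevated coefficient vector already lies in the image of E.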