WorldWideScience

Sample records for high dimensional vector

  1. Oracle Inequalities for High Dimensional Vector Autoregressions

    DEFF Research Database (Denmark)

    Callot, Laurent; Kock, Anders Bredahl

    This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order...
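A minimal sketch of the model class this record concerns: each VAR equation is estimated by the LASSO on lagged values, so the number of parameters may be large relative to the sample size. The simulated transition matrix, penalty level, and use of scikit-learn are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: LASSO estimation of a stationary VAR(1).
# Simulated data and penalty level are illustrative only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, k = 200, 10                         # T observations, k series
A_true = np.diag(np.full(k, 0.5))      # sparse transition matrix
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(k)

X, Y = y[:-1], y[1:]                   # lagged regressors and responses
# One LASSO regression per equation; stacking the coefficient vectors estimates A.
A_hat = np.vstack([Lasso(alpha=0.01, fit_intercept=False).fit(X, Y[:, j]).coef_
                   for j in range(k)])
print("max entrywise estimation error:", np.max(np.abs(A_hat - A_true)))
```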

  2. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  3. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted significant recent attention in many fields, including statistics, applied mathematics and electrical engineering. In this thesis, we consider several such problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k; a similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The...

  4. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to bring their storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away via feature selection is better than compressing them together with the useful dimensions via feature compression. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
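The paper's importance-sorting algorithm is not reproduced below; the fragment only sketches the advocated pipeline (select dimensions, then 1-bit quantize), with a plain variance score standing in for the paper's supervised/unsupervised importance measures.

```python
# Sketch of dimension selection followed by 1-bit quantization for FV/VLAD-style
# vectors. The variance-based score is a stand-in for the paper's importance sorting.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 8192))       # hypothetical FV/VLAD image vectors

importance = X.var(axis=0)                  # stand-in importance score
keep = np.argsort(importance)[::-1][:1024]  # keep the 1024 highest-ranked dims

X_bits = X[:, keep] > 0                     # 1-bit quantization: keep only signs
# Hamming distance on the bit codes then approximates similarity on the kept dims.
d = np.count_nonzero(X_bits[0] != X_bits[1])
print("Hamming distance between the first two images:", d)
```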

  5. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
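A one-sample sketch of the statistic as the abstract describes it, a summation of log-transformed squared t-statistics; the centering and scaling constants needed for the asymptotic normal calibration are derived in the paper and omitted here.

```python
# One-sample sketch of the diagonal likelihood ratio idea: sum log-transformed
# squared t-statistics. Calibration constants from the paper are omitted (assumption).
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 500                              # p >> n: high-dimensional regime
X = rng.standard_normal((n, p))             # data generated under H0: mean zero

t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)  # per-dimension t-statistics
stat_lrt = np.sum(np.log1p(t**2 / (n - 1)))  # summation of log-transformed t^2
stat_diag_hotelling = np.sum(t**2)           # direct summation, for contrast
print(stat_lrt, stat_diag_hotelling)
```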

  6. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free software like Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code, including input data instructions and sample input data. (author)

  7. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free software like Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code, including input data instructions and sample input data. (author)

  8. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.

  9. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.

  10. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte-Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  11. A structural modification of the two dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo.

    1985-02-01

    Although the two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI as an optimized scalar computer code, the call for more efficient code usage arising from recent trends such as high burn-up and load-follow operation has pushed the code into a further modification stage. A principal aim of the modification is to transform the already implemented scalar subroutines into vectorized forms so that the program structure runs efficiently on high-speed vector computers. This structural modification has been brought to a successful conclusion. The two benchmark tests subsequently performed to examine the effect of the modification lead to the following concluding remarks: (1) In the first benchmark test, three comparatively high-burnup fuel rods that had been irradiated under HBWR, BWR, and PWR conditions were prepared. In all cases, the net computing time consumed by the vectorized FEMAXI is approximately 50% less than that consumed by the original one. (2) In the second benchmark test, a total of 26 PWR fuel rods that had been irradiated in the burn-up range of 13-30 MWd/kgU and subsequently power-ramped in the R2 reactor, Sweden, were prepared. In this case the code was used to construct an envelope of the PCI-failure threshold through 26 code runs. To reach the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min, while the original FEMAXI-III consumed 36 min. (3) The effects of this structural modification are found to be attributable mainly to the saving of net computing time in the mechanical calculation of the vectorized FEMAXI-III code. (author)

  12. Multi-perspective views of students’ difficulties with one-dimensional vector and two-dimensional vector

    Science.gov (United States)

    Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno

    2017-01-01

    Researchers of students’ conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without any identification of the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and high school physics courses. In this study, we intended to explore students’ understanding of one-dimensional and two-dimensional vectors from multiple perspectives: a test perspective and an interview perspective. Our research study adopted a mixed-methodology design. The participants were sixty third-semester students of a physics education department. The data were collected by tests and interviews. We divided the students’ understanding of one-dimensional and two-dimensional vectors into two categories, namely vector skills for the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of students provided correct answers for vector skills for the addition of one-dimensional and two-dimensional vectors, and only 27% of students provided correct answers for the relation between vector skills and conceptual understanding.

  13. A new test for the mean vector in high-dimensional data

    Directory of Open Access Journals (Sweden)

    Knavoot Jiamwattanapong

    2015-08-01

    For testing the mean vector where the data are drawn from a multivariate normal population, the renowned Hotelling's T^2 test is no longer valid when the dimension of the data equals or exceeds the sample size. In this study, we consider the problem of testing the hypothesis H: μ = μ_0 and propose a new test based on the idea of keeping more information from the sample covariance matrix. The development of the statistic is based on Hotelling's T^2 distribution, and the new test has an invariance property under a group of scalar transformations. The asymptotic distribution is derived under the null hypothesis. The simulation results show that the proposed test performs well and is more powerful as the data dimension increases for a given sample size. An analysis of DNA microarray data with the new test is demonstrated.

  14. Vector (two-dimensional) magnetic phenomena

    International Nuclear Information System (INIS)

    Enokizono, Masato

    2002-01-01

    In this paper, some interesting phenomena are described from the viewpoint of the two-dimensional magnetic property, also referred to as the vector magnetic property. It reveals the imperfection of the conventional magnetic property description, and some phenomena of interest were discovered. We found that magnetic materials have strong nonlinearity in both magnitude and spatial phase, due to the relationship between the magnetic field strength H-vector and the magnetic flux density B-vector. Therefore, magnetic properties should be defined as a vector relationship. Furthermore, a new Barkhausen signal was observed under rotating flux. (Author)

  15. On the existence of n-dimensional indecomposable vector bundles

    International Nuclear Information System (INIS)

    Tan Xiaojiang.

    1991-09-01

    Let X be an arbitrary smooth irreducible complex projective curve of genus g with g ≥ 4. In this paper we extend the existence theorem of special divisors to higher-dimensional indecomposable vector bundles. We give a necessary and sufficient condition for the existence of n-dimensional indecomposable vector bundles E with deg(E) = d and dim H^0(X, E) ≥ h. We also determine under what conditions the set of all such vector bundles is finite and how many elements it contains. (author). 9 refs

  16. Inverse Operation of Four-dimensional Vector Matrix

    OpenAIRE

    H J Bao; A J Sang; H X Chen

    2011-01-01

    This is a new study in a series defining and proving multidimensional vector matrix mathematics, which includes the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse and related properties. These innovative concepts of multi-dimensional vector matrix mathematics were created by the authors and have numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.

  17. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  18. Vectorization of three-dimensional neutron diffusion code CITATION

    International Nuclear Information System (INIS)

    Harada, Hiroo; Ishiguro, Misako

    1985-01-01

    The three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code can be expected to run at high speed on recent vector supercomputers when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. In particular, calculation algorithms suited to vectorizing the inner-outer iterative calculations, which consume most of the computing time, are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner iterations given as input data are also investigated, since the computing time depends on these values. (author)
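The odd-even (red-black) ordering mentioned above is what makes the sweeps vectorizable: every point of one color depends only on points of the other color, so a whole color can be updated in one array operation. The NumPy fragment below illustrates this on a 2D Poisson model problem; CITATION itself is Fortran and solves multigroup diffusion.

```python
# Red-black (odd-even) SOR on a 2D Poisson model problem: within one color the
# updates are independent, so each half-sweep is a single vectorized operation.
import numpy as np

n, omega = 64, 1.8
u = np.zeros((n, n))                       # unknowns, zero Dirichlet boundary
f = np.ones((n, n))                        # source term
h2 = (1.0 / (n - 1)) ** 2
i, j = np.meshgrid(np.arange(1, n - 1), np.arange(1, n - 1), indexing="ij")

for sweep in range(300):
    for color in (0, 1):                   # red half-sweep, then black
        m = ((i + j) % 2) == color
        ii, jj = i[m], j[m]
        gs = 0.25 * (u[ii + 1, jj] + u[ii - 1, jj] +
                     u[ii, jj + 1] + u[ii, jj - 1] + h2 * f[ii, jj])
        u[ii, jj] += omega * (gs - u[ii, jj])   # SOR update of a whole color

res = u[1:-1, 1:-1] - 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                              u[1:-1, 2:] + u[1:-1, :-2] + h2 * f[1:-1, 1:-1])
print("max residual:", np.abs(res).max())
```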

  19. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  20. Vectorized Matlab Codes for Linear Two-Dimensional Elasticity

    Directory of Open Access Journals (Sweden)

    Jonas Koko

    2007-01-01

    A vectorized Matlab implementation of the linear finite element method is provided for two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.
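The paper's implementation is Matlab; the NumPy analogue below conveys the same vectorization point, assembling every element stiffness matrix at once with no loop over triangles (a scalar Laplacian is used for brevity in place of the paper's elasticity operator).

```python
# Vectorized P1 stiffness assembly with no loop over triangles (NumPy analogue of
# the paper's Matlab idea; scalar Laplacian instead of elasticity for brevity).
import numpy as np
from scipy.sparse import coo_matrix

nodes = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)   # toy mesh: unit square
tris = np.array([[0, 1, 2], [0, 2, 3]])                     # two triangles

p = nodes[tris]                                # (nt, 3, 2) vertex coordinates
e1, e2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
area = 0.5 * np.abs(e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
# Gradients of the three barycentric basis functions, for all triangles at once:
b = np.stack([p[:, 1, 1] - p[:, 2, 1], p[:, 2, 1] - p[:, 0, 1], p[:, 0, 1] - p[:, 1, 1]], 1)
c = np.stack([p[:, 2, 0] - p[:, 1, 0], p[:, 0, 0] - p[:, 2, 0], p[:, 1, 0] - p[:, 0, 0]], 1)
Ke = (b[:, :, None] * b[:, None, :] + c[:, :, None] * c[:, None, :]) / (4 * area)[:, None, None]

rows = np.repeat(tris, 3, axis=1).ravel()      # global row index of every entry
cols = np.tile(tris, (1, 3)).ravel()           # global column index of every entry
K = coo_matrix((Ke.ravel(), (rows, cols))).tocsr()   # duplicate entries are summed
print(K.toarray())
```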

  1. Two-dimensional gauge model with vector U(1) and axial-vector U(1) symmetries

    International Nuclear Information System (INIS)

    Watabiki, Y.

    1989-01-01

    We have succeeded in constructing a two-dimensional gauge model with both vector U(1) and axial-vector U(1) symmetries. This model is exactly solvable. The Schwinger term vanishes in this model as a consequence of the above symmetries, and negative-norm states appear. However, the norms of physical states are always positive semidefinite due to the gauge symmetries

  2. Vector current scattering in two dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Fleishon, N.L.

    1979-04-01

    The interaction of vector currents with hadrons is considered in a two dimensional SU(N) color gauge theory coupled to fermions, in leading order of an N^{-1} expansion. After a detailed review of the model, various transition matrix elements of one and two vector currents between hadronic states are considered. A pattern is established whereby the low-mass currents interact via meson dominance and the highly virtual currents interact via bare quark-current couplings. This pattern is especially evident in the hadronic contribution to inelastic Compton scattering, M_{μν} = ∫ dx e^{iq·x} ⟨p|T(J_μ(x)J_ν(0))|p⟩, which is investigated in various kinematic limits. It is shown that in the dual Regge region of soft processes the currents interact as purely hadronic systems. Modification of dimensional counting rules is indicated by a study of a large-angle scattering analog. In several hard inclusive non-light-cone processes, parton model ideas are confirmed. The impulse approximation is valid in a Bjorken-Paschos-like limit with very virtual currents. A Drell-Yan type annihilation mechanism is found in photoproduction of massive lepton pairs, leading to the identification of a parton wave function for the current. 56 references

  3. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called 'curse of dimensionality', coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  4. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification.

    Science.gov (United States)

    Song, Yang; Li, Qing; Huang, Heng; Feng, Dagan; Chen, Mei; Cai, Weidong

    2017-08-01

    Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and the commonly used dimension reduction techniques.

  5. High Accuracy Vector Helium Magnetometer

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed HAVHM instrument is a laser-pumped helium magnetometer with both triaxial vector and omnidirectional scalar measurement capabilities in a single...

  6. Additional neutral vector boson in the 7-dimensional theory of gravi-electro-weak interactions

    International Nuclear Information System (INIS)

    Gavrilov, V.R.

    1988-01-01

    Possibilities for the manifestation of an additional neutral vector boson, whose existence is predicted by the 7-dimensional theory of gravi-electro-weak interactions, are analyzed. A particular case of muon neutrino scattering on a muon is considered. In this case the additional neutral current manifests itself both at high and at relatively low energies of particle collisions.

  7. Vector Boson Scattering at High Mass

    CERN Document Server

    Sherwood, P

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate WW scalar and vector resonances, WZ vector resonances and a ZZ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons.

  8. Desingularization strategies for three-dimensional vector fields

    CERN Document Server

    Torres, Felipe Cano

    1987-01-01

    For a vector field D = A_1 ∂/∂X_1 + A_2 ∂/∂X_2 + A_3 ∂/∂X_3, where the A_i are series in X, the algebraic multiplicity measures the singularity at the origin. In this research monograph several strategies are given to make the algebraic multiplicity of a three-dimensional vector field decrease, by means of permissible blowing-ups of the ambient space, i.e. transformations of the type X_i = X'_i X_1, i = 2, 3. A logarithmic point of view is taken, marking the exceptional divisor of each blowing-up and considering only the vector fields which are tangent to this divisor, instead of the whole tangent sheaf. The first part of the book is devoted to the logarithmic background and to the permissible blowing-ups. The main part corresponds to the control of the algorithms for the desingularization strategies by means of numerical invariants inspired by Hironaka's characteristic polygon. Only basic knowledge of local algebra and algebraic geometry is assumed of the reader. The pathologies we find in the reduction of vector fields are analogous to pathologies in the pro...

  9. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  10. The curvature and the algebra of Killing vectors in five-dimensional space

    International Nuclear Information System (INIS)

    Rcheulishvili, G.

    1990-12-01

    This paper presents the Killing vectors for a five-dimensional space with a given line element. The algebras formed by these vectors are written down. The curvature two-forms are described. (author). 10 refs

  11. Semilogarithmic Nonuniform Vector Quantization of Two-Dimensional Laplacean Source for Small Variance Dynamics

    Directory of Open Access Journals (Sweden)

    Z. Peric

    2012-04-01

    In this paper a high dynamic range nonuniform two-dimensional vector quantization model for a Laplacean source is provided. The semilogarithmic A-law compression characteristic is used as the radial scalar compression characteristic of the two-dimensional vector quantization. The optimal number of concentric quantization domains (amplitude levels) is expressed as a function of the parameter A. An exact distortion analysis with closed-form expressions is provided. It is shown that the proposed model provides high SQNR values over a wide range of variances, and exceeds the quality obtained by scalar A-law quantization at the same bit rate, so it can be used in various switching and adaptation implementations for the realization of high-quality signal compression.
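A sketch of the radial compression this model is built on: the standard A-law characteristic is applied to the vector magnitude, after which the compressed radius is quantized uniformly into concentric domains. The number of domains below is illustrative; the paper derives its optimal value as a function of A.

```python
# Semilogarithmic A-law compression of the radial component of a two-dimensional
# Laplacean source. The number of amplitude levels L is illustrative only.
import numpy as np

def a_law(x, A=87.56):
    """Standard A-law compressor characteristic for |x| <= 1."""
    ax = np.abs(x)
    y = np.where(ax < 1.0 / A,
                 A * ax / (1.0 + np.log(A)),
                 (1.0 + np.log(A * np.clip(ax, 1.0 / A, 1.0))) / (1.0 + np.log(A)))
    return np.sign(x) * y

rng = np.random.default_rng(3)
v = rng.laplace(size=(1000, 2))               # two-dimensional Laplacean samples
r = np.linalg.norm(v, axis=1)                 # radial (magnitude) component
r_c = a_law(r / r.max())                      # compressed radius in [0, 1]
L = 16                                        # concentric quantization domains
ring = np.minimum((r_c * L).astype(int), L - 1)   # uniform cells after compression
print("samples per ring:", np.bincount(ring, minlength=L))
```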

  12. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of KwaZulu...

  13. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    Science.gov (United States)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  14. Vector Casimir effect for a D-dimensional sphere

    International Nuclear Information System (INIS)

    Milton, K.A.

    1997-01-01

    The Casimir energy or stress due to modes in a D-dimensional volume subject to TM (mixed) boundary conditions on a bounding spherical surface is calculated. Both interior and exterior modes are included. Together with earlier results found for scalar modes (TE modes), this gives the Casimir effect for fluctuating 'electromagnetic' (vector) fields inside and outside a spherical shell. Known results for three dimensions, first found by Boyer, are reproduced. Qualitatively, the results for TM modes are similar to those for scalar modes: Poles occur in the stress at positive even dimensions, and cusps (logarithmic singularities) occur for integer dimensions D≤1. Particular attention is given the interesting case of D=2. copyright 1997 The American Physical Society

  15. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.

  16. An evaluation method of cross-type H-coil angle for accurate two-dimensional vector magnetic measurement

    International Nuclear Information System (INIS)

    Maeda, Yoshitaka; Todaka, Takashi; Shimoji, Hiroyasu; Enokizono, Masato; Sievert, Johanes

    2006-01-01

    Recently, two-dimensional vector magnetic measurement has become popular, and many researchers in this field have been attracted to developing more accurate measuring systems and standard measurement systems. Because the two-dimensional vector magnetic property is the relationship between the magnetic flux density vector B and the magnetic field strength vector H, the most important parameters are those vector components. For accurate measurement of the field strength vector, we have developed an evaluation apparatus, which consists of a standard solenoid coil and a high-precision turntable. Angle errors of a double H-coil (a cross-type H-coil), which is wound one coil after the other around a former, can be evaluated with this apparatus. The magnetic field strength is compensated with the measured angle error.

  17. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  18. Vector calculus in non-integer dimensional space and its applications to fractal media

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-02-01

    We suggest a generalization of vector calculus to the case of non-integer dimensional space. First- and second-order operations, such as the gradient, divergence, and the scalar and vector Laplace operators, are defined for non-integer dimensional space. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for the non-integer dimensional space models.
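For the rotationally covariant (angle-independent) fields considered here, the suggested operators reduce to radial forms in which the non-integer dimension D enters through the weight (D−1)/r; the display below records this reduction (normalization conventions are assumed, not quoted from the paper).

```latex
% Radial forms of the non-integer dimensional operators for angle-independent
% fields; normalization conventions are assumed rather than quoted.
\operatorname{grad}_D f(r) = \mathbf{e}_r \, \frac{\partial f}{\partial r}, \qquad
\operatorname{div}_D \bigl( v_r(r)\,\mathbf{e}_r \bigr)
  = \frac{\partial v_r}{\partial r} + \frac{D-1}{r}\, v_r, \qquad
\Delta_D f(r) = \frac{\partial^2 f}{\partial r^2} + \frac{D-1}{r}\,\frac{\partial f}{\partial r}.
```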

  19. A static investigation of yaw vectoring concepts on two-dimensional convergent-divergent nozzles

    Science.gov (United States)

    Berrier, B. L.; Mason, M. L.

    1983-01-01

    The flow-turning capability and nozzle internal performance of yaw-vectoring nozzle geometries were tested in the NASA Langley 16-ft Transonic wind tunnel. The concept was investigated as a means of enhancing fighter jet performance. Five two-dimensional convergent-divergent nozzles were equipped for yaw-vectoring and examined. The configurations included a translating left sidewall, left and right sidewall flaps downstream of the nozzle throat, left sidewall flaps or port located upstream of the nozzle throat, and a powered rudder. Trials were also run with 20 deg of pitch thrust vectoring added. The feasibility of providing yaw-thrust vectoring was demonstrated, with the largest yaw vector angles being obtained with sidewall flaps downstream of the nozzle primary throat. It was concluded that yaw vector designs that scoop or capture internal nozzle flow provide the largest yaw-vector capability, but decrease the thrust the most.

  20. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    Science.gov (United States)

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is reduced to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
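The paper's one_DVP construction is more elaborate, but the core property claimed here, invariance of a one-dimensional pattern under image rotation, can be illustrated with a generic stand-in: sorted angular distances from a target star to its neighbours.

```python
# Generic stand-in for a rotation-invariant 1D star pattern: sorted angular
# distances from a target star to its neighbours do not change when the image
# rotates about the boresight. The paper's one_DVP construction is more elaborate.
import numpy as np

def pattern(dirs, target):
    """Sorted angles between the target star's unit vector and all neighbours."""
    cosang = np.clip(dirs @ dirs[target], -1.0, 1.0)
    return np.sort(np.arccos(np.delete(cosang, target)))

rng = np.random.default_rng(6)
v = rng.standard_normal((20, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # observed star unit vectors

theta = 0.7                                      # rotate the image about z (boresight)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.allclose(pattern(v, 0), pattern(v @ R.T, 0)))   # True: pattern is invariant
```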

  1. The algebra of Killing vectors in five-dimensional space

    International Nuclear Information System (INIS)

    Rcheulishvili, G.L.

    1990-01-01

    This paper presents the algebras formed by the previously found Killing vectors in the space with line element ds. Under some conditions, an explicit dependence on r is given for the functions entering the line element ds. The curvature two-forms are described. 7 refs

  2. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    DEFF Research Database (Denmark)

    2013-01-01

    ... as to traverse a field of view, and receive circuitry (306) configured to receive a two-dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells, etc. A beamformer...

  3. Absolute continuity of autophage measures on finite-dimensional vector spaces

    Energy Technology Data Exchange (ETDEWEB)

    Raja, C.R.E. [Stat-Math Unit, Indian Statistical Institute, Bangalore (India); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)]. E-mail: creraja@isibang.ac.in

    2002-06-01

    We consider a class of measures called autophage which was introduced and studied by Szekely for measures on the real line. We show that the autophage measures on finite-dimensional vector spaces over the reals or Q_p are infinitely divisible without idempotent factors and are absolutely continuous with bounded continuous density. We also show that certain semistable measures on such vector spaces are absolutely continuous. (author)

  4. High frequency vibration analysis by the complex envelope vectorization.

    Science.gov (United States)

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure for solving high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content, and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, underlining the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.
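CEDA/CEV's variable transformation is not reproduced here; the fragment below only illustrates the underlying idea of mapping a high-frequency oscillation to a slowly varying complex envelope, using the standard analytic-signal construction as a stand-in.

```python
# Envelope idea behind CEDA/CEV: a rapidly oscillating signal is mapped to a
# slowly varying complex envelope. The analytic-signal construction below is a
# standard stand-in, not the paper's transformation.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 4000)
f0 = 400.0                                        # high-frequency carrier (Hz)
slow = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)      # slowly varying amplitude
x = slow * np.cos(2 * np.pi * f0 * t)             # fast physical signal

analytic = hilbert(x)                             # x + i * Hilbert(x)
envelope = analytic * np.exp(-2j * np.pi * f0 * t)   # demodulate the carrier
# The envelope is low frequency, so it can be resolved on a much coarser grid.
print(np.allclose(np.abs(envelope)[200:-200], slow[200:-200], atol=0.05))
```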

  5. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  6. Eruptive Massive Vector Particles of 5-Dimensional Kerr-Gödel Spacetime

    Science.gov (United States)

    Övgün, A.; Sakalli, I.

    2018-02-01

    In this paper, we investigate Hawking radiation of massive spin-1 particles from 5-dimensional Kerr-Gödel spacetime. By applying the WKB approximation and the Hamilton-Jacobi ansatz to the relativistic Proca equation, we obtain the quantum tunneling rate of the massive vector particles. Using the obtained tunneling rate, we show how one impeccably computes the Hawking temperature of the 5-dimensional Kerr-Gödel spacetime.
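The WKB route summarized here follows the standard tunneling relations: the emission rate is set by the imaginary part of the classical action, and matching it to a Boltzmann factor yields the Hawking temperature (the Kerr-Gödel-specific action is worked out in the paper).

```latex
% Standard Hamilton-Jacobi/WKB tunneling relations behind such derivations;
% the Kerr-Godel-specific action S is computed in the paper itself.
\Gamma \sim \exp\!\Bigl(-\frac{2}{\hbar}\,\operatorname{Im} S\Bigr),
\qquad
\exp\!\Bigl(-\frac{2}{\hbar}\,\operatorname{Im} S\Bigr)
  = \exp\!\Bigl(-\frac{E}{T_H}\Bigr)
\;\Longrightarrow\;
T_H = \frac{\hbar\,E}{2\,\operatorname{Im} S}.
```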

  7. Spatial optical (2+1)-dimensional scalar- and vector-solitons in saturable nonlinear media

    Energy Technology Data Exchange (ETDEWEB)

    Weilnau, C.; Traeger, D.; Schroeder, J.; Denz, C. [Institute of Applied Physics, Westfaelische Wilhelms-Universitaet Muenster, Corrensstr. 2/4, 48149 Muenster (Germany); Ahles, M.; Petter, J. [Institute of Applied Physics, Technische Universitaet Darmstadt, Hochschulstr. 6, 64289 Darmstadt (Germany)

    2002-10-01

    (2+1)-dimensional optical spatial solitons have become a major field of research in nonlinear physics throughout the last decade due to their potential in adaptive optical communication technologies. With the help of photorefractive crystals that supply the required type of nonlinearity for soliton generation, we are able to demonstrate experimentally the formation, the dynamic properties, and especially the interaction of solitary waves, which were so far only known from general soliton theory. Among the complex interaction scenarios of scalar solitons, we reveal a distinct behavior denoted as anomalous interaction, which is unique in soliton-supporting systems. Further on, we realize highly parallel, light-induced waveguide configurations based on photorefractive screening solitons that give rise to technical applications towards waveguide couplers and dividers as well as all-optical information processing devices where light is controlled by light itself. Finally, we demonstrate the generation, stability and propagation dynamics of multi-component or vector solitons, multipole transverse optical structures bearing a complex geometry. In analogy to the particle-light dualism of scalar solitons, various types of vector solitons can - in a broader sense - be interpreted as molecules of light. (Abstract Copyright [2002], Wiley Periodicals, Inc.)

  8. Spatial optical (2+1)-dimensional scalar- and vector-solitons in saturable nonlinear media

    International Nuclear Information System (INIS)

    Weilnau, C.; Traeger, D.; Schroeder, J.; Denz, C.; Ahles, M.; Petter, J.

    2002-01-01

    (2+1)-dimensional optical spatial solitons have become a major field of research in nonlinear physics throughout the last decade due to their potential in adaptive optical communication technologies. With the help of photorefractive crystals that supply the required type of nonlinearity for soliton generation, we are able to demonstrate experimentally the formation, the dynamic properties, and especially the interaction of solitary waves, which were so far only known from general soliton theory. Among the complex interaction scenarios of scalar solitons, we reveal a distinct behavior denoted as anomalous interaction, which is unique in soliton-supporting systems. Further on, we realize highly parallel, light-induced waveguide configurations based on photorefractive screening solitons that give rise to technical applications towards waveguide couplers and dividers as well as all-optical information processing devices where light is controlled by light itself. Finally, we demonstrate the generation, stability and propagation dynamics of multi-component or vector solitons, multipole transverse optical structures bearing a complex geometry. In analogy to the particle-light dualism of scalar solitons, various types of vector solitons can - in a broader sense - be interpreted as molecules of light. (Abstract Copyright [2002], Wiley Periodicals, Inc.)

  9. New techniques for the scientific visualization of three-dimensional multi-variate and vector fields

    Energy Technology Data Exchange (ETDEWEB)

    Crawfis, Roger A. [Univ. of California, Davis, CA (United States)

    1995-10-01

    Volume rendering allows us to represent a density cloud with ideal properties (single scattering, no self-shadowing, etc.). Scientific visualization utilizes this technique by mapping an abstract variable or property in a computer simulation to a synthetic density cloud. This thesis extends volume rendering from its limitation of isotropic density clouds to anisotropic and/or noisy density clouds. Design aspects of these techniques are discussed that aid in the comprehension of scientific information. Anisotropic volume rendering is used to represent vector-based quantities in scientific visualization. Velocity and vorticity in a fluid flow, electric and magnetic waves in an electromagnetic simulation, and blood flow within the body are examples of vector-based information within a computer simulation or gathered from instrumentation. Understanding these fields can be crucial to understanding the overall physics or physiology. Three techniques for representing three-dimensional vector fields are presented: Line Bundles, Textured Splats and Hair Splats. These techniques are aimed at providing a high-level (qualitative) overview of the flows, offering the user a substantial amount of information with a single image or animation. Non-homogeneous volume rendering is used to represent multiple variables. Computer simulations can typically have over thirty variables that describe properties whose understanding is useful to the scientist. Trying to understand each of these separately can be time consuming. Trying to understand any cause-and-effect relationships between different variables can be impossible. NoiseSplats is introduced to represent two or more properties in a single volume rendering of the data. This technique is also aimed at providing a qualitative overview of the flows.

  10. String vacuum backgrounds with covariantly constant null Killing vector and two-dimensional quantum gravity

    International Nuclear Information System (INIS)

    Tseytlin, A.A.

    1993-01-01

    We consider a two-dimensional sigma model with a (2+N)-dimensional Minkowski signature target space metric having a covariantly constant null Killing vector. We study solutions of the conformal invariance conditions in 2+N dimensions and find that generic solutions can be represented in terms of the RG flow in N-dimensional 'transverse space' theory. The resulting conformal invariant sigma model is interpreted as a quantum action of the two-dimensional scalar ('dilaton') quantum gravity model coupled to a (non-conformal) 'transverse' sigma model. The conformal factor of the two-dimensional metric is identified with a light-cone coordinate of the (2+N)-dimensional sigma model. We also discuss the case when the transverse theory is conformal (with or without the antisymmetric tensor background) and reproduce in a systematic way the solutions with flat transverse space known before. (orig.)

  11. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
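A two-step sketch of the hybrid LASSO+LSE idea on a single VAR equation: the LASSO selects the support, then ordinary least squares is refit on the selected coefficients to reduce shrinkage bias. Data, dimensions, and penalty level are illustrative.

```python
# Two-step sketch of the hybrid LASSO+LSE estimator on one VAR equation:
# step 1 selects the support, step 2 refits least squares on it to reduce
# the LASSO's shrinkage bias. Data and penalty level are illustrative.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
T, k = 300, 40
X = rng.standard_normal((T, k))                  # lagged channel values (stand-in)
beta = np.zeros(k); beta[:3] = [0.8, -0.5, 0.3]  # sparse true coefficients
y = X @ beta + 0.1 * rng.standard_normal(T)      # one channel's next value

support = np.flatnonzero(Lasso(alpha=0.05, fit_intercept=False).fit(X, y).coef_)
beta_hat = np.zeros(k)
beta_hat[support] = LinearRegression(fit_intercept=False).fit(X[:, support], y).coef_
print("selected:", support, "refit coefficients:", beta_hat[support])
```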

  12. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  13. Suggested Courseware for the Non-Calculus Physics Student: Measurement, Vectors, and One-Dimensional Motion.

    Science.gov (United States)

    Mahoney, Joyce; And Others

    1988-01-01

    Evaluates 16 commercially available courseware packages covering topics for introductory physics. Discusses the price, sub-topics, program type, interaction, time, calculus required, graphics, and comments for each program. Recommends two packages, in measurement and vectors and in one-dimensional motion, respectively. (YP)

  14. Codimension-one tangency bifurcations of global Poincare maps of four-dimensional vector fields

    NARCIS (Netherlands)

    Krauskopf, B.; Lee, C.M.; Osinga, H.M.

    2009-01-01

    When one considers a Poincaré return map on a general unbounded (n − 1)-dimensional section for a vector field in R^n, there are typically points where the flow is tangent to the section. The only notable exception is when the system is (equivalent to) a periodically forced system. The tangencies can...

  15. Higher-dimensional generalizations of the Watanabe–Strogatz transform for vector models of synchronization

    Science.gov (United States)

    Lohe, M. A.

    2018-06-01

    We generalize the Watanabe–Strogatz (WS) transform, which acts on the Kuramoto model in d = 2 dimensions, to a higher-dimensional vector transform which operates on vector oscillator models of synchronization in any dimension d, for the case of identical frequency matrices. These models have conserved quantities constructed from the cross ratios of inner products of the vector variables, which are invariant under the vector transform, and have trajectories which lie on the unit sphere S^{d−1}. Application of the vector transform leads to a partial integration of the equations of motion, leaving a reduced set of independent equations to be solved, for any number of nodes N. We discuss properties of complete synchronization and use the reduced equations to derive a stability condition for completely synchronized trajectories on S^{d−1}. We further generalize the vector transform to a mapping which acts in R^d and in particular preserves the unit ball B^d, and leaves invariant the cross ratios constructed from inner products of vectors in B^d. This mapping can be used to partially integrate a system of vector oscillators with trajectories in B^d, and for d = 2 leads to an extension of the Kuramoto system to a system of oscillators with time-dependent amplitudes and trajectories in the unit disk. We find an inequivalent generalization of the Möbius map which also preserves the unit ball but leaves invariant a different set of cross ratios, this time constructed from the vector norms. This leads to a different extension of the Kuramoto model with trajectories in the complex plane that can be partially integrated by means of fractional linear transformations.
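A sketch of the d-dimensional vector oscillator model the transform acts on: unit vectors on S^{d−1} driven by a common antisymmetric frequency matrix plus tangential mean-field coupling. The Euler-plus-renormalization integrator and all parameter values are illustrative choices.

```python
# d-dimensional vector oscillators on the unit sphere S^{d-1} with a common
# antisymmetric frequency matrix (identical frequencies, as the abstract requires).
# Euler stepping with renormalization is an illustrative integrator choice.
import numpy as np

rng = np.random.default_rng(5)
N, d, K, dt = 50, 4, 2.0, 0.01
W = rng.standard_normal((d, d))
Omega = W - W.T                                    # common frequency matrix
x = rng.standard_normal((N, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)      # start on the sphere

for _ in range(2000):
    mean = x.mean(axis=0)                          # mean field (1/N) * sum_j x_j
    # dx_i = Omega x_i + K [ <x> - (x_i . <x>) x_i ]: coupling tangent to the sphere
    dx = x @ Omega.T + K * (mean - (x * mean).sum(axis=1, keepdims=True) * x)
    x += dt * dx
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # project back onto S^{d-1}

print("order parameter |<x>|:", np.linalg.norm(x.mean(axis=0)))  # near 1: synchronized
```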

  16. General Dimensional Multiple-Output Support Vector Regressions and Their Multiple Kernel Learning.

    Science.gov (United States)

    Chung, Wooyong; Kim, Jisu; Lee, Heejin; Kim, Euntai

    2015-11-01

    Support vector regression has been considered as one of the most important regression or function approximation methodologies in a variety of fields. In this paper, two new general dimensional multiple output support vector regressions (MSVRs) named SOCPL1 and SOCPL2 are proposed. The proposed methods are formulated in the dual space and their relationship with the previous works is clearly investigated. Further, the proposed MSVRs are extended into the multiple kernel learning and their training is implemented by the off-the-shelf convex optimization tools. The proposed MSVRs are applied to benchmark problems and their performances are compared with those of the previous methods in the experimental section.
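
    For contrast with the joint dual formulations proposed above, the conventional baseline is to train one independent SVR per output dimension. A minimal sketch with scikit-learn (which does not expose the SOCPL1/SOCPL2 joint problem) follows; the data and hyperparameters are illustrative.

```python
# Baseline multiple-output SVR: one independent SVR per output (not the paper's method).
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 4))               # 4 inputs
Y = np.column_stack([np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200),
                     np.cos(X[:, 1]) + 0.1 * rng.standard_normal(200)])

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X, Y)
print(model.predict(X[:3]))                         # two outputs per sample
```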

  17. Anisotropic fractal media by vector calculus in non-integer dimensional space

    International Nuclear Information System (INIS)

    Tarasov, Vasily E.

    2014-01-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  18. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru [Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991 (Russian Federation)

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  19. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of the fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. In this paper differential operators of first and second orders for fractional space and non-integer dimensional space are suggested. The differential operators are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic types of media. The Poisson's equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  20. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen’s Reading Images (2006), plays a crucial role in distinguishing between ‘narrative’, action-oriented processes and ‘conceptual’, state-oriented processes. The use of this concept in image analysis has usually focused… …should be taken into account in discussing ‘reactions’, which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim’s account of vectors, these issues are outlined…

  1. A Novel Medical Freehand Sketch 3D Model Retrieval Method by Dimensionality Reduction and Feature Vector Transformation

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2016-01-01

    To assist physicians in quickly finding a required 3D model among a large collection of medical models, we propose a novel retrieval method, called DRFVT, which combines the characteristics of the dimensionality reduction (DR) and feature vector transformation (FVT) methods. The DR method reduces the dimensionality of the feature vector; only the top M low frequency Discrete Fourier Transform coefficients are retained. The FVT method transforms the original feature vector and generates a new feature vector to solve the problem of noise sensitivity. The experimental results demonstrate that the DRFVT method achieves more effective and efficient retrieval than other proposed methods.
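
    The DR step described above is straightforward to sketch: keep only the top M low-frequency DFT coefficients of a feature vector. A minimal illustration with NumPy follows; M and the input vector are arbitrary stand-ins, and the paper's full DRFVT pipeline (including the FVT step) is not reproduced.

```python
# Dimensionality reduction by low-frequency DFT truncation (illustrative sketch).
import numpy as np

def reduce_dft(feature_vec: np.ndarray, M: int) -> np.ndarray:
    """Return the M lowest-frequency DFT coefficients of the feature vector."""
    spectrum = np.fft.rfft(feature_vec)    # real-input DFT, ordered low -> high frequency
    return spectrum[:M]

x = np.random.default_rng(2).standard_normal(512)   # a 512-dim feature vector
z = reduce_dft(x, M=16)                             # 16 complex coefficients
print(z.shape)                                      # (16,)
```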

  2. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.

  3. Towards a physics on fractals: Differential vector calculus in three-dimensional continuum with fractal metric

    Science.gov (United States)

    Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael

    2016-02-01

    One way to deal with physical problems on nowhere differentiable fractals is the mapping of these problems into corresponding problems for a continuum with a proper fractal metric. Along this way, different definitions of the fractal metric have been suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for the continuum with any suitable definition of fractal metric. This offers a novel tool to study physics on fractals.

  4. A method for real-time three-dimensional vector velocity imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav

    2003-01-01

    The paper presents an approach for making real-time three-dimensional vector flow imaging. Synthetic aperture data acquisition is used, and the data is beamformed along the flow direction to yield signals usable for flow estimation. The signals are cross-correlated to determine the shift in position...... are done using 16 × 16 = 256 elements at a time and the received signals from the same elements are sampled. Access to the individual elements is done through 16-to-1 multiplexing, so that only a 256-channel transmit and receive system is needed. The method has been investigated using Field II...

  5. A terrestrial lidar-based workflow for determining three-dimensional slip vectors and associated uncertainties

    Science.gov (United States)

    Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.

    2012-01-01

    Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.

  6. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
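
    The distance-concentration effect motivating this work is easy to demonstrate numerically: as the dimensionality grows, the ratio between the farthest and nearest Euclidean distances from a query shrinks toward 1. A small NumPy illustration on uniform random data (not the paper's QED method) follows.

```python
# Demonstration of distance concentration in high dimensions (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
for d in (2, 10, 100, 1000):
    data = rng.random((1000, d))
    query = rng.random(d)
    dist = np.linalg.norm(data - query, axis=1)
    print(f"d={d:5d}  farthest/nearest distance ratio: {dist.max() / dist.min():.2f}")
```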

  7. High energy beta rays and vectors of Bilharzia and Fasciola

    International Nuclear Information System (INIS)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnea natalensis, the snail vector of Schistosoma haematobium have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation in 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas. (author)

  8. High energy beta rays and vectors of Bilharzia and Fasciola

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnea natalensis, the snail vector of Schistosoma haematobium have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation in 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas.

  9. A lower dimensional feature vector for identification of partial discharges of different origin using time measurements

    International Nuclear Information System (INIS)

    Evagorou, Demetres; Kyprianou, Andreas; Georghiou, George E; Lewin, Paul L; Stavrou, Andreas

    2012-01-01

    Partial discharge (PD) classification into sources of different origin is essential in evaluating the severity of the damage caused by its activity on the insulation of power cables and their accessories. More specifically, some types of PD can be classified as having a detrimental effect on the integrity of the insulation while others can be deemed relatively harmless, rendering the correct classification of different PD types of vital importance to electrical utilities. In this work, a feature vector was proposed based on higher order statistics on selected nodes of the wavelet packet transform (WPT) coefficients of time domain measurements, which can compactly represent the characteristics of different PD sources. To assess its performance, experimental data acquired under laboratory conditions for four different PD sources encountered in power systems were used. The two machine learning methods, namely the support vector machine and the probabilistic neural network, employed as the classification algorithms, achieved overall classification rates of around 98%. In comparison, the utilization of the scaled, raw WPT coefficients as a feature vector resulted in classification accuracy of around 99%, but with a significantly higher number of dimensions (1304 versus 16), validating the PD identification ability of the proposed feature. Dimensionality reduction becomes a key factor in online, real-time data collection and processing of PD measurements, reducing the classification effort and the data-storage requirements. Therefore, the proposed method can constitute a potential tool for such online measurements, after addressing issues related to on-site measurements such as the rejection of interference. (paper)
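
    A hedged sketch of the kind of feature vector described above: higher order statistics (skewness and kurtosis) of wavelet packet coefficients at the nodes of a chosen decomposition level. It assumes the PyWavelets and SciPy packages; the wavelet, level, node selection, and input signal are illustrative, not the paper's tuned choices.

```python
# Higher-order statistics of wavelet packet coefficients as a compact feature vector.
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def pd_feature_vector(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):   # all nodes at the chosen level
        c = node.data
        feats.extend([skew(c), kurtosis(c)])          # two moments per node
    return np.asarray(feats)

x = np.random.default_rng(4).standard_normal(1024)    # stand-in for a PD pulse
print(pd_feature_vector(x).shape)                     # (2 * 2**level,) = (16,)
```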

  10. Predicting respiratory tumor motion with multi-dimensional adaptive filters and support vector regression

    International Nuclear Information System (INIS)

    Riaz, Nadeem; Wiersma, Rodney; Mao Weihua; Xing Lei; Shanker, Piyush; Gudmundsson, Olafur; Widrow, Bernard

    2009-01-01

    Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
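
    A minimal look-ahead prediction sketch in the spirit of the SVR approach above, assuming scikit-learn and a synthetic sinusoidal breathing trace sampled at 30 Hz; the lag count, the roughly 400 ms horizon, and the kernel settings are illustrative assumptions.

```python
# SVR prediction of future tumor position from lagged samples (illustrative sketch).
import numpy as np
from sklearn.svm import SVR

fs, horizon, n_lags = 30, 12, 15          # 30 Hz; 12 samples is ~400 ms ahead
t = np.arange(0, 60, 1 / fs)
pos = np.sin(2 * np.pi * 0.25 * t) + 0.02 * np.random.default_rng(5).standard_normal(t.size)

# Each row: n_lags past samples; target: the position `horizon` samples ahead.
rows = range(n_lags, pos.size - horizon)
X = np.array([pos[i - n_lags:i] for i in rows])
y = np.array([pos[i + horizon] for i in rows])

split = len(y) * 2 // 3                   # train on the first two thirds
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"RMSE at ~400 ms latency: {rmse:.3f}")
```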

  11. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  12. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    , current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI...

  13. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  14. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  15. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  16. Command vector memory systems: high performance at low cost

    OpenAIRE

    Corbal San Adrián, Jesús; Espasa Sans, Roger; Valero Cortés, Mateo

    1998-01-01

    The focus of this paper is on designing both a low cost and high performance, high bandwidth vector memory system that takes advantage of modern commodity SDRAM memory chips. To successfully extract the full bandwidth from SDRAM parts, we propose a new memory system organization based on sending commands to the memory system as opposed to sending individual addresses. A command specifies, in a few bytes, a request for multiple independent memory words. A command is similar to a burst found in...

  17. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  18. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  19. The Use of Sparse Direct Solver in Vector Finite Element Modeling for Calculating Two Dimensional (2-D) Magnetotelluric Responses in Transverse Electric (TE) Mode

    Science.gov (United States)

    Yihaa Roodhiyah, Lisa’; Tjong, Tiffany; Nurhasan; Sutarno, D.

    2018-04-01

    In previous research, the linear systems arising from vector finite element modeling of two-dimensional (2-D) magnetotelluric (MT) responses in TE mode were solved with a non-sparse direct solver. That approach has weaknesses which need to be addressed, notably accuracy at low frequencies (10^-3 Hz to 10^-5 Hz), which had not yet been achieved, and the high computational cost on dense meshes. In this work, a sparse direct solver is used instead of a non-sparse direct solver to overcome these weaknesses. A sparse direct solver is advantageous for the linear systems of the vector finite element method because the system matrix is symmetric and sparse. The sparse direct solver was validated against analytical solutions for a homogeneous half-space model and a vertical contact model. The validation results show that the sparse direct solver is more stable than the non-sparse direct solver in computing the linear problems of the vector finite element method, especially at low frequencies. In the end, accurate 2-D MT responses at low frequencies (10^-3 Hz to 10^-5 Hz) were obtained with efficient array memory allocation and less computational time.
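
    The core computational point, that a symmetric sparse system should be handed to a sparse direct solver rather than a dense one, can be illustrated with SciPy. The matrix below is a generic sparse symmetric stand-in, not an actual vector finite element MT system.

```python
# Sparse versus dense direct solve of a symmetric sparse system (illustrative).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_sparse = spla.spsolve(A, b)                 # sparse direct solver
x_dense = np.linalg.solve(A.toarray(), b)     # dense solve of the same system
print(np.allclose(x_sparse, x_dense))         # True, but the sparse path scales far better
```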

  20. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  1. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  2. Static investigation of two fluidic thrust-vectoring concepts on a two-dimensional convergent-divergent nozzle

    Science.gov (United States)

    Wing, David J.

    1994-01-01

    A static investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel of two thrust-vectoring concepts which utilize fluidic mechanisms for deflecting the jet of a two-dimensional convergent-divergent nozzle. One concept involved using the Coanda effect to turn a sheet of injected secondary air along a curved sidewall flap and, through entrainment, draw the primary jet in the same direction to produce yaw thrust vectoring. The other concept involved deflecting the primary jet to produce pitch thrust vectoring by injecting secondary air through a transverse slot in the divergent flap, creating an oblique shock in the divergent channel. Utilizing the Coanda effect to produce yaw thrust vectoring was largely unsuccessful. Small vector angles were produced at low primary nozzle pressure ratios, probably because the momentum of the primary jet was low. Significant pitch thrust vector angles were produced by injecting secondary flow through a slot in the divergent flap. Thrust vector angle decreased with increasing nozzle pressure ratio but moderate levels were maintained at the highest nozzle pressure ratio tested. Thrust performance generally increased at low nozzle pressure ratios and decreased near the design pressure ratio with the addition of secondary flow.

  3. In-Vivo High Dynamic Range Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    example with a high dynamic velocity range. Velocities an order of magnitude apart are detected on the femoral artery of a 41-year-old healthy individual. Three distinct heart cycles are captured during a 3 s acquisition. The estimated vector velocities are compared against each other within...... the heart cycle. The relative standard deviation of the measured velocity magnitude between the three peak systoles was found to be 5.11% with a standard deviation on the detected angle of 1.06°. In the diastole, it was 1.46% and 6.18°, respectively. The results prove that the method is able to estimate flow...

  4. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We note that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  5. All ASD complex and real 4-dimensional Einstein spaces with Λ≠0 admitting a nonnull Killing vector

    Science.gov (United States)

    Chudecki, Adam

    2016-12-01

    Anti-self-dual (ASD) 4-dimensional complex Einstein spaces with nonzero cosmological constant Λ equipped with a nonnull Killing vector are considered. It is shown that any conformally nonflat metric of such spaces can be always brought to a special form and the Einstein field equations can be reduced to the Boyer-Finley-Plebański equation (Toda field equation). Some alternative forms of the metric are discussed. All possible real slices (neutral, Euclidean and Lorentzian) of ASD complex Einstein spaces with Λ≠0 admitting a nonnull Killing vector are found.

  6. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  7. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  8. High-energy vector boson scattering after the Higgs discovery

    International Nuclear Information System (INIS)

    Kilian, Wolfgang; Sekulla, Marco; Ohl, Thorsten; Reuter, Juergen

    2014-08-01

    Weak vector-boson W,Z scattering at high energy probes the Higgs sector and is most sensitive to any new physics associated with electroweak symmetry breaking. We show that in the presence of the 125 GeV Higgs boson, a conventional effective-theory analysis fails for this class of processes. We propose to extrapolate the effective-theory ansatz by an extension of the parameter-free K-matrix unitarization prescription, which we denote as direct T-matrix unitarization. We generalize this prescription to arbitrary non-perturbative models and describe the implementation, as an asymptotically consistent reference model matched to the low-energy effective theory. We present exemplary numerical results for full six-fermion processes at the LHC.

  9. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
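
    FLANN itself is a C++ library (also exposed through OpenCV), so the conceptual operation it accelerates, k-nearest-neighbor search over a database of high-dimensional descriptors, is illustrated here with scikit-learn's tree-based index instead; the data and parameters are placeholders.

```python
# Conceptual k-nearest-neighbor search over high-dimensional descriptors (not FLANN).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
database = rng.standard_normal((10_000, 128))   # e.g. 128-dim SIFT-like descriptors
queries = rng.standard_normal((5, 128))

index = NearestNeighbors(n_neighbors=3, algorithm="kd_tree").fit(database)
dist, idx = index.kneighbors(queries)           # 3 nearest matches per query
print(idx)
```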

  10. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  11. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  12. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
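
    The "classic approach" to discovering an active subspace that the abstracts above contrast themselves with can be sketched directly: average the outer products of gradients over samples and take the dominant eigenvectors. A minimal NumPy illustration on a toy function with a known one-dimensional active direction follows (the papers' gradient-free GP construction is not reproduced).

```python
# Gradient-based active subspace recovery on a toy function (illustrative sketch).
import numpy as np

rng = np.random.default_rng(7)
w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])        # true 1-D active direction

def grad_f(x):                                  # f(x) = sin(w.x) has gradient along w
    return np.cos(w @ x) * w

X = rng.standard_normal((500, 5))
C = np.mean([np.outer(g, g) for g in map(grad_f, X)], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)            # ascending eigenvalues
print(eigvecs[:, -1])                           # ~ +/- w / ||w||, the active direction
```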

  13. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  14. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
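
    The adaLASSO studied above is, computationally, a two-step procedure: obtain an initial estimate, then solve a LASSO problem whose per-coefficient penalties are inversely weighted by that estimate. A minimal scikit-learn sketch follows, using the standard column-rescaling reduction of the weighted L1 penalty; the first-step estimator and tuning constants are illustrative.

```python
# Two-step adaptive LASSO sketch via column rescaling (illustrative, i.i.d. data).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(8)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]          # sparse truth
y = X @ beta + 0.5 * rng.standard_normal(n)

init = Ridge(alpha=1.0).fit(X, y).coef_                  # first-step estimator
w = 1.0 / (np.abs(init) + 1e-4)                          # adaptive penalty weights
model = Lasso(alpha=0.05, max_iter=10_000).fit(X / w, y) # rescaling = weighted L1
beta_hat = model.coef_ / w                               # undo the rescaling
print(np.flatnonzero(np.round(beta_hat, 2)))             # ideally [0 1 2]
```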

  15. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  16. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of 2 weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  17. Unidirectional Wave Vector Manipulation in Two-Dimensional Space with an All Passive Acoustic Parity-Time-Symmetric Metamaterials Crystal

    Science.gov (United States)

    Liu, Tuo; Zhu, Xuefeng; Chen, Fei; Liang, Shanjun; Zhu, Jie

    2018-03-01

    Exploring the concept of non-Hermitian Hamiltonians respecting parity-time symmetry with classical wave systems is of great interest as it enables the experimental investigation of parity-time-symmetric systems through the quantum-classical analogue. Here, we demonstrate unidirectional wave vector manipulation in two-dimensional space, with an all passive acoustic parity-time-symmetric metamaterials crystal. The metamaterials crystal is constructed through interleaving groove- and holey-structured acoustic metamaterials to provide an intrinsic parity-time-symmetric potential that is two-dimensionally extended and curved, which allows the flexible manipulation of unpaired wave vectors. At the transition point from the unbroken to broken parity-time symmetry phase, the unidirectional sound focusing effect (along with reflectionless acoustic transparency in the opposite direction) is experimentally realized over the spectrum. This demonstration confirms the capability of passive acoustic systems to carry the experimental studies on general parity-time symmetry physics and further reveals the unique functionalities enabled by the judiciously tailored unidirectional wave vectors in space.

  18. Numerical simulation of multi-dimensional two-phase flow based on flux vector splitting

    Energy Technology Data Exchange (ETDEWEB)

    Staedtke, H.; Franchello, G.; Worth, B. [Joint Research Centre - Ispra Establishment (Italy)

    1995-09-01

    This paper describes a new approach to the numerical simulation of transient, multidimensional two-phase flow. The development is based on a fully hyperbolic two-fluid model of two-phase flow using separated conservation equations for the two phases. Features of the new model include the existence of real eigenvalues, and a complete set of independent eigenvectors which can be expressed algebraically in terms of the major dependent flow parameters. This facilitates the application of numerical techniques specifically developed for high speed single-phase gas flows which combine signal propagation along characteristic lines with the conservation property with respect to mass, momentum and energy. Advantages of the new model for the numerical simulation of one- and two- dimensional two-phase flow are discussed.

  19. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  20. Monte Carlo simulation of the three-state vector Potts model on a three-dimensional random lattice

    International Nuclear Information System (INIS)

    Jianbo Zhang; Heping Ying

    1991-09-01

    We have performed a numerical simulation of the three-state vector Potts model on a three-dimensional random lattice. The averages of energy density, magnetization, specific heat and susceptibility of the system on N^3 (N = 8, 10, 12) lattices were calculated. The results show that a first-order nature of the Z(3) symmetry-breaking transition appears, characterized by a thermal hysteresis in the energy density as well as a drop of the magnetization that becomes sharper and more discontinuous with increasing volume in the cross-over region. The results obtained on the random lattice were consistent with those obtained on the three-dimensional cubic lattice. (author). 12 refs, 4 figs
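
    For readers who want to reproduce the flavor of such a simulation, a minimal Metropolis update for the three-state Potts model on a regular N^3 cubic lattice (the paper's cross-check geometry, not its random lattice) is sketched below; the temperature and update count are arbitrary.

```python
# Metropolis sweeps for the 3-state Potts model on a cubic lattice (illustrative).
import numpy as np

rng = np.random.default_rng(9)
N, beta_T = 8, 1.0                        # lattice size, inverse temperature
spins = rng.integers(0, 3, size=(N, N, N))

def local_energy(s, i, j, k):
    """-J * (number of equal nearest neighbors), J = 1, periodic boundaries."""
    e = 0
    for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        e -= s[i, j, k] == s[(i+d[0]) % N, (j+d[1]) % N, (k+d[2]) % N]
    return e

for _ in range(10 * N**3):                # single-site Metropolis updates
    i, j, k = rng.integers(0, N, size=3)
    old, new = spins[i, j, k], rng.integers(0, 3)
    e_old = local_energy(spins, i, j, k)
    spins[i, j, k] = new
    if rng.random() >= np.exp(-beta_T * (local_energy(spins, i, j, k) - e_old)):
        spins[i, j, k] = old              # reject the move

m = np.mean(np.exp(2j * np.pi * spins / 3))   # complex Z(3) magnetization
print(abs(m))
```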

  1. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly-undetectable stegosystems for real digital media. The main design principle is to minimize a suitably-defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  2. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and an automatic-vectorizing Fortran compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  3. Highly evolvable malaria vectors : The genomes of 16 Anopheles mosquitoes

    NARCIS (Netherlands)

    Neafsey, D. E.; Waterhouse, R. M.; Abai, M. R.; Aganezov, S. S.; Alekseyev, M. A.; Allen, J. E.; Amon, J.; Arca, B.; Arensburger, P.; Artemov, G.; Assour, L. A.; Basseri, H.; Berlin, A.; Birren, B. W.; Blandin, S. A.; Brockman, A. I.; Burkot, T. R.; Burt, A.; Chan, C. S.; Chauve, C.; Chiu, J. C.; Christensen, M.; Costantini, C.; Davidson, V. L. M.; Deligianni, E.; Dottorini, T.; Dritsou, V.; Gabriel, S. B.; Guelbeogo, W. M.; Hall, A. B.; Han, M. V.; Hlaing, T.; Hughes, D. S. T.; Jenkins, A. M.; Jiang, X.; Jungreis, I.; Kakani, E. G.; Kamali, M.; Kemppainen, P.; Kennedy, R. C.; Kirmitzoglou, I. K.; Koekemoer, L. L.; Laban, N.; Langridge, N.; Lawniczak, M. K. N.; Lirakis, M.; Lobo, N. F.; Lowy, E.; Maccallum, R. M.; Mao, C.; Maslen, G.; Mbogo, C.; Mccarthy, J.; Michel, K.; Mitchell, S. N.; Moore, W.; Murphy, K. A.; Naumenko, A. N.; Nolan, T.; Novoa, E. M.; O'loughlin, S.; Oringanje, C.; Oshaghi, M. A.; Pakpour, N.; Papathanos, P. A.; Peery, A. N.; Povelones, M.; Prakash, A.; Price, D. P.; Rajaraman, A.; Reimer, L. J.; Rinker, D. C.; Rokas, A.; Russell, T. L.; Sagnon, N.; Sharakhova, M. V.; Shea, T.; Simao, F. A.; Simard, F.; Slotman, M. A.; Somboon, P.; Stegniy, V.; Struchiner, C. J.; Thomas, G. W. C.; Tojo, M.; Topalis, P.; Tubio, J. M. C.; Unger, M. F.; Vontas, J.; Walton, C.; Wilding, C. S.; Willis, J. H.; Wu, Y.-c.; Yan, G.; Zdobnov, E. M.; Zhou, X.; Catteruccia, F.; Christophides, G. K.; Collins, F. H.; Cornman, R. S.; Crisanti, A.; Donnelly, M. J.; Emrich, S. J.; Fontaine, M. C.; Gelbart, W.; Hahn, M. W.; Hansen, I. A.; Howell, P. I.; Kafatos, F. C.; Kellis, M.; Lawson, D.; Louis, C.; Luckhart, S.; Muskavitch, M. A. T.; Ribeiro, J. M.; Riehle, M. A.; Sharakhov, I. V.; Tu, Z.; Zwiebel, L. J.; Besansky, N. J.

    2015-01-01

    Variation in vectorial capacity for human malaria among Anopheles mosquito species is determined by many factors, including behavior, immunity, and life history. To investigate the genomic basis of vectorial capacity and explore new avenues for vector control, we sequenced the genomes of 16

  4. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    Science.gov (United States)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.

  5. Three-dimensional tumor spheroids for in vitro analysis of bacteria as gene delivery vectors in tumor therapy.

    Science.gov (United States)

    Osswald, Annika; Sun, Zhongke; Grimm, Verena; Ampem, Grace; Riegel, Karin; Westendorf, Astrid M; Sommergruber, Wolfgang; Otte, Kerstin; Dürre, Peter; Riedel, Christian U

    2015-12-12

    Several studies in animal models demonstrated that obligate and facultative anaerobic bacteria of the genera Bifidobacterium, Salmonella, or Clostridium specifically colonize solid tumors. Consequently, these and other bacteria are discussed as live vectors to deliver therapeutic genes to inhibit tumor growth. Therapeutic approaches for cancer treatment using anaerobic bacteria have been investigated in different mouse models. In the present study, solid three-dimensional (3D) multicellular tumor spheroids (MCTS) of the colorectal adenocarcinoma cell line HT-29 were generated and tested for their potential to study prodrug-converting enzyme therapies using bacterial vectors in vitro. HT-29 MCTS resembled solid tumors displaying all relevant features with an outer zone of proliferating cells and hypoxic and apoptotic regions in the core. Upon incubation with HT-29 MCTS, Bifidobacterium bifidum S17 and Salmonella typhimurium YB1 selectively localized, survived and replicated in hypoxic areas inside MCTS. Furthermore, spores of the obligate anaerobe Clostridium sporogenes germinated in these hypoxic areas. To further evaluate the potential of MCTS to investigate therapeutic approaches using bacteria as gene delivery vectors, recombinant bifidobacteria expressing prodrug-converting enzymes were used. Expression of a secreted cytosine deaminase in combination with 5-fluorocytosine had no effect on growth of MCTS due to an intrinsic resistance of HT-29 cells to 5-fluorouracil, i.e. the converted drug. However, a combination of the prodrug CB1954 and a strain expressing a secreted chromate reductase effectively inhibited MCTS growth. Collectively, the presented results indicate that MCTS are a suitable and reliable model to investigate live bacteria as gene delivery vectors for cancer therapy in vitro.

  6. Using a Feature Subset Selection method and Support Vector Machine to address curse of dimensionality and redundancy in Hyperion hyperspectral data classification

    Directory of Open Access Journals (Sweden)

    Amir Salimi

    2018-04-01

    Full Text Available The curse of dimensionality, resulting from insufficient training samples and redundancy, is an important problem in the supervised classification of hyperspectral data. This problem can be handled by Feature Subset Selection (FSS) methods and the Support Vector Machine (SVM). FSS methods manage redundancy by removing redundant spectral bands; moreover, kernel-based methods, and especially the SVM, have a high ability to classify limited-sample data sets. This paper mainly aims to assess the capability of an FSS method and the SVM under curse-of-dimensionality conditions and to compare the results with an Artificial Neural Network (ANN), when they are used to classify alteration zones in the Hyperion hyperspectral image acquired over the largest Iranian porphyry copper complex. The results demonstrated that as training samples were reduced, the accuracy of the SVM decreased by only 1.8%, while the accuracy of the ANN dropped sharply, by 14.01%. In addition, a hybrid FSS was applied to reduce the dimension of the Hyperion data: of its 165 usable spectral bands, only 18 were selected as the most important and informative. Although this dimensionality reduction did not greatly improve the performance of the SVM, the ANN showed a significant improvement in computational time and a slight enhancement in average accuracy. Therefore, the SVM, being only weakly sensitive to the size of the training data set and the feature space, can be applied to curse-of-dimensionality problems, and FSS methods can improve the performance of non-kernel-based classifiers by eliminating redundant features. Keywords: Curse of dimensionality, Feature Subset Selection, Hydrothermal alteration, Hyperspectral, SVM
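
    The workflow described above can be sketched with scikit-learn; a mutual-information SelectKBest stands in for the paper's hybrid FSS, and synthetic data stand in for the Hyperion scene, so all numbers are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for a Hyperion scene: 165 "bands", few labelled pixels, redundancy.
X, y = make_classification(n_samples=300, n_features=165, n_informative=18,
                           n_redundant=80, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)

# Feature subset selection: keep the 18 most informative bands.
fss = SelectKBest(mutual_info_classif, k=18).fit(X_tr, y_tr)
X_tr_s, X_te_s = fss.transform(X_tr), fss.transform(X_te)

for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                  ("ANN", MLPClassifier(max_iter=2000, random_state=0))]:
    acc_full = clf.fit(X_tr, y_tr).score(X_te, y_te)
    acc_fss = clf.fit(X_tr_s, y_tr).score(X_te_s, y_te)
    print(f"{name}: all bands {acc_full:.3f}, 18 selected bands {acc_fss:.3f}")
```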

  7. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and to high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
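
    A minimal sketch of the indicator construction, assuming the mean-field stability matrix (Jacobian) and a reference quasi-stable configuration are available; since such matrices are generally non-normal, the summed overlaps below are a heuristic rather than an exact spectral decomposition:

```python
import numpy as np

def early_warning_indicator(jacobian, state, reference):
    """Heuristic overlap of the current deviation with the unstable
    directions of the mean-field stability matrix (a sketch only)."""
    eigvals, eigvecs = np.linalg.eig(jacobian)
    unstable = eigvecs[:, eigvals.real > 0]     # unstable eigen-directions
    if unstable.shape[1] == 0:
        return 0.0
    deviation = state - reference
    deviation = deviation / np.linalg.norm(deviation)
    # Sum of squared overlaps with each unstable direction.
    return float(np.sum(np.abs(unstable.conj().T @ deviation) ** 2))

# Toy illustration with a random stability matrix (hypothetical numbers).
rng = np.random.default_rng(1)
J = rng.normal(size=(10, 10)) - 2.0 * np.eye(10)   # mostly stable
x_ref = rng.normal(size=10)
print(early_warning_indicator(J, x_ref + 0.01 * rng.normal(size=10), x_ref))
```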

  8. Neural networks for the dimensionality reduction of GOME measurement vector in the estimation of ozone profiles

    International Nuclear Information System (INIS)

    Del Frate, F.; Iapaolo, M.; Casadio, S.; Godin-Beekmann, S.; Petitdidier, M.

    2005-01-01

    Dimensionality reduction can be of crucial importance in the application of inversion schemes to atmospheric remote sensing data. In this study the problem of dimensionality reduction in the retrieval of ozone concentration profiles from the radiance measurements provided by the Global Ozone Monitoring Experiment (GOME) instrument on board the ESA satellite ERS-2 is considered. By means of radiative transfer modelling, neural networks and pruning algorithms, a complete procedure has been designed to extract the GOME spectral ranges most crucial for the inversion. The quality of the resulting retrieval algorithm has been evaluated by comparing its performance to that yielded by other schemes and to co-located profiles obtained with lidar measurements.
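
    One simple flavor of input pruning consistent with the description above can be sketched as follows; magnitude-based pruning of first-layer weights is used here as a stand-in, since the paper's actual pruning algorithm may differ, and the data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: 60 "spectral channels", only a few drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))
y = 2.0 * X[:, 5] - 1.5 * X[:, 17] + 0.5 * X[:, 42] + 0.1 * rng.normal(size=500)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, y)

# Magnitude-based input pruning: rank channels by the L2 norm of their
# outgoing first-layer weights; channels with small norms contribute little.
importance = np.linalg.norm(net.coefs_[0], axis=1)
keep = np.argsort(importance)[::-1][:5]
print("channels retained after pruning:", sorted(keep.tolist()))
```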

  9. Insect cell transformation vectors that support high level expression and promoter assessment in insect cell culture

    Science.gov (United States)

    A somatic transformation vector, pDP9, was constructed that provides a simplified means of producing permanently transformed cultured insect cells that support high levels of protein expression of foreign genes. The pDP9 plasmid vector incorporates DNA sequences from the Junonia coenia densovirus th...

  10. Hybrid RGSA and Support Vector Machine Framework for Three-Dimensional Magnetic Resonance Brain Tumor Classification

    Directory of Open Access Journals (Sweden)

    R. Rajesh Sharma

    2015-01-01

    algorithm (RGSA). Support vector machines, backpropagation networks, and k-nearest neighbor classifiers are used to evaluate the goodness of the classifier approach. The preliminary evaluation of the system is performed using 320 real-time brain MRI images. The system is trained and tested by using a leave-one-case-out method. The performance of the classifier is tested using a receiver operating characteristic curve of 0.986 (±0.002). The experimental results demonstrate the contribution of the systematic and efficient feature extraction and feature selection algorithms to the performance of state-of-the-art feature classification methods.

  11. Representation theory of 2-groups on finite dimensional 2-vector spaces

    OpenAIRE

    Elgueta, Josep

    2004-01-01

    In this paper, the 2-category $\mathfrak{Rep}_{{\bf 2Mat}_{\mathbb{C}}}(\mathbb{G})$ of (weak) representations of an arbitrary (weak) 2-group $\mathbb{G}$ on (some version of) Kapranov and Voevodsky's 2-category of (complex) 2-vector spaces is studied. In particular, the set of equivalence classes of representations is computed in terms of the invariants $\pi_0(\mathbb{G})$, $\pi_1(\mathbb{G})$ and $[\alpha]\in H^3(\pi_0(\mathbb{G}),\pi_1(\mathbb{G}))$ classifying $\mathbb{G}$. Also the categ...

  12. Migration transformation of two-dimensional magnetic vector and tensor fields

    DEFF Research Database (Denmark)

    Zhdanov, Michael; Cai, Hongzhu; Wilson, Glenn

    2012-01-01

    We introduce a new method of rapid interpretation of magnetic vector and tensor field data, based on ideas of potential field migration which extends the general principles of seismic and electromagnetic migration to potential fields. 2-D potential field migration represents a direct integral...... to the downward continuation of a well-behaved analytical function. We present case studies for imaging of SQUID-based magnetic tensor data acquired over a magnetite skarn at Tallawang, Australia. The results obtained from magnetic tensor field migration agree very well with both Euler deconvolution and the known...

  13. Application of support vector machine to three-dimensional shape-based virtual screening using comprehensive three-dimensional molecular shape overlay with known inhibitors.

    Science.gov (United States)

    Sato, Tomohiro; Yuki, Hitomi; Takaya, Daisuke; Sasaki, Shunta; Tanaka, Akiko; Honma, Teruki

    2012-04-23

    In this study, machine learning using support vector machine was combined with three-dimensional (3D) molecular shape overlay, to improve the screening efficiency. Since the 3D molecular shape overlay does not use fingerprints or descriptors to compare two compounds, unlike 2D similarity methods, the application of machine learning to a 3D shape-based method has not been extensively investigated. The 3D similarity profile of a compound is defined as the array of 3D shape similarities with multiple known active compounds of the target protein and is used as the explanatory variable of support vector machine. As the measures of 3D shape similarity for our new prediction models, the prediction performances of the 3D shape similarity metrics implemented in ROCS, such as ShapeTanimoto and ScaledColor, were validated, using the known inhibitors of 15 target proteins derived from the ChEMBL database. The learning models based on the 3D similarity profiles stably outperformed the original ROCS when more than 10 known inhibitors were available as the queries. The results demonstrated the advantages of combining machine learning with the 3D similarity profile to process the 3D shape information of plural active compounds.
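
    The essential construction, the 3-D similarity profile as the SVM feature vector, can be sketched as below; the similarity matrix would in practice come from a shape-overlay tool such as ROCS, and random values stand in for it here (in real use one would also cross-validate rather than score on the training set):

```python
import numpy as np
from sklearn.svm import SVC

# Assume sim[i, j] holds a 3-D shape similarity (e.g. ShapeTanimoto in [0, 1])
# between screening compound i and known inhibitor j, precomputed externally;
# random numbers stand in for real overlay scores here.
rng = np.random.default_rng(0)
n_compounds, n_known_actives = 200, 15
sim = rng.uniform(0.0, 1.0, size=(n_compounds, n_known_actives))
labels = (sim.mean(axis=1) + 0.1 * rng.normal(size=n_compounds) > 0.55).astype(int)

# The 3-D similarity profile itself is the explanatory variable for the SVM.
clf = SVC(kernel="rbf", probability=True).fit(sim, labels)
scores = clf.predict_proba(sim)[:, 1]      # ranking score for virtual screening
print("top-5 ranked compounds:", np.argsort(scores)[::-1][:5])
```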

  14. Completeness of the System of Root Vectors of 2 × 2 Upper Triangular Infinite-Dimensional Hamiltonian Operators in Symplectic Spaces and Applications

    Institute of Scientific and Technical Information of China (English)

    Hua WANG; ALATANCANG; Junjie HUANG

    2011-01-01

    The authors investigate the completeness of the system of eigen or root vectors of the 2 × 2 upper triangular infinite-dimensional Hamiltonian operator H0. First, the geometrical multiplicity and the algebraic index of the eigenvalue of H0 are considered. Next, some necessary and sufficient conditions for the completeness of the system of eigen or root vectors of H0 are obtained. Finally, the obtained results are tested in several examples.

  15. On the Computation of Degenerate Hopf Bifurcations for n-Dimensional Multiparameter Vector Fields

    Directory of Open Access Journals (Sweden)

    Michail P. Markakis

    2016-01-01

    Full Text Available The restriction of an n-dimensional nonlinear parametric system on the center manifold is treated via a new proper symbolic form and analytical expressions of the involved quantities are obtained as functions of the parameters by lengthy algebraic manipulations combined with computer assisted calculations. Normal forms regarding degenerate Hopf bifurcations up to codimension 3, as well as the corresponding Lyapunov coefficients and bifurcation portraits, can be easily computed for any system under consideration.

  16. Generalized synthetic aperture radar automatic target recognition by convolutional neural network with joint use of two-dimensional principal component analysis and support vector machine

    Science.gov (United States)

    Zheng, Ce; Jiang, Xue; Liu, Xingzhao

    2017-10-01

    Convolutional neural network (CNN), as a vital part of the deep learning research field, has shown powerful potential for automatic target recognition (ATR) of synthetic aperture radar (SAR). However, the high complexity caused by the deep structure of CNN makes it difficult to generalize. An improved form of CNN with higher generalization capability and less probability of overfitting, which further improves the efficiency and robustness of the SAR ATR system, is proposed. The convolution layers of CNN are combined with a two-dimensional principal component analysis algorithm. Correspondingly, the kernel support vector machine is utilized as the classifier layer instead of the multilayer perceptron. The verification experiments are implemented using the moving and stationary target acquisition and recognition database, and the results validate the efficiency of the proposed method.
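
    A sketch of the 2DPCA-plus-kernel-SVM stage described above (the CNN feature extraction is omitted, and random arrays stand in for SAR image chips):

```python
import numpy as np
from sklearn.svm import SVC

def twod_pca(images, n_components):
    """Two-dimensional PCA: project image rows onto the leading eigenvectors
    of the image covariance matrix (the feature-extraction step)."""
    mean = images.mean(axis=0)
    g = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(g)
    return eigvecs[:, ::-1][:, :n_components]   # columns: top eigenvectors

# Random stand-ins for SAR chips (real use: CNN feature maps or patches).
rng = np.random.default_rng(0)
images = rng.normal(size=(120, 32, 32))
labels = rng.integers(0, 3, size=120)

w = twod_pca(images, n_components=6)
features = np.stack([(a @ w).ravel() for a in images])  # 32x6 -> 192-dim
clf = SVC(kernel="rbf").fit(features, labels)           # kernel SVM classifier
print("training accuracy:", clf.score(features, labels))
```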

  17. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    Science.gov (United States)

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP), and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
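
    The core idea, keeping iterates in low-rank sum-of-products form and capping the rank with an SVD-based reduction, can be illustrated on a tiny separable-plus-coupling 2-D model; this plain shifted power iteration is a simplified stand-in for the paper's shifted block power method:

```python
import numpy as np

# Keep the power-method iterate in low-rank (sum-of-products) form for a
# separable-plus-coupling 2-D model Hamiltonian
#   H = h1 (x) I + I (x) h2 + c (u u^T) (x) (w w^T),
# applied to a matrix-shaped wavefunction X as
#   H(X) = h1 X + X h2^T + c (u^T X w) u w^T,
# so a full n1*n2-component vector is never formed.
rng = np.random.default_rng(0)
n1, n2, rank_cap, c = 40, 40, 4, 0.1
h1 = np.diag(np.arange(n1, dtype=float))        # toy 1-D Hamiltonians
h2 = np.diag(np.arange(n2, dtype=float))
u = rng.normal(size=n1); u /= np.linalg.norm(u)
w = rng.normal(size=n2); w /= np.linalg.norm(w)

def apply_h(x):
    return h1 @ x + x @ h2.T + c * (u @ x @ w) * np.outer(u, w)

def truncate(x, r):
    """Rank reduction: SVD-truncate the iterate to at most r SOP terms."""
    uu, ss, vv = np.linalg.svd(x, full_matrices=False)
    return (uu[:, :r] * ss[:r]) @ vv[:r]

sigma = float(n1 + n2)                          # rough upper bound on spec(H)
x = truncate(rng.normal(size=(n1, n2)), rank_cap)
for _ in range(500):                            # shifted power iteration
    x = truncate(sigma * x - apply_h(x), rank_cap)
    x /= np.linalg.norm(x)
print("ground-state energy estimate:", np.sum(x * apply_h(x)))
```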

  18. Clinical validation of coronal and sagittal spinal curve measurements based on three-dimensional vertebra vector parameters.

    Science.gov (United States)

    Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás

    2012-10-01

    For many decades, visualization and evaluation of three-dimensional (3D) spinal deformities have only been possible by two-dimensional (2D) radiodiagnostic methods, and as a result, characterization and classification were based on 2D terminologies. Recent developments in medical digital imaging and 3D visualization techniques including surface 3D reconstructions opened a chance for a long-sought change in this field. Supported by a 3D Terminology on Spinal Deformities of the Scoliosis Research Society, an approach for 3D measurements and a new 3D classification of scoliosis yielded several compelling concepts on 3D visualization and new proposals for 3D classification in recent years. More recently, a new proposal for visualization and complete 3D evaluation of the spine by 3D vertebra vectors has been introduced by our workgroup, a concept, based on EOS 2D/3D, a groundbreaking new ultralow radiation dose integrated orthopedic imaging device with sterEOS 3D spine reconstruction software. Comparison of accuracy, correlation of measurement values, intraobserver and interrater reliability of methods by conventional manual 2D and vertebra vector-based 3D measurements in a routine clinical setting. Retrospective, nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years) including 10 healthy athletes with normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). Overall range of coronal curves was between 2.4 and 117.5°. Analysis of accuracy and reliability of measurements was carried out on a group of all patients and in subgroups based on coronal plane deviation: 0 to 10° (Group 1; n=36), 10 to 25° (Group 2; n=25), 25 to 50° (Group 3; n=69), 50 to 75

  19. Manipulation of dielectric Rayleigh particles using highly focused elliptically polarized vector fields.

    Science.gov (United States)

    Gu, Bing; Xu, Danfeng; Rui, Guanghao; Lian, Meng; Cui, Yiping; Zhan, Qiwen

    2015-09-20

    Generation of vectorial optical fields with arbitrary polarization distribution is of great interest in areas where exotic optical fields are desired. In this work, we experimentally demonstrate the versatile generation of linearly polarized vector fields, elliptically polarized vector fields, and circularly polarized vortex beams through introducing attenuators in a common-path interferometer. By means of Richards-Wolf vectorial diffraction method, the characteristics of the highly focused elliptically polarized vector fields are studied. The optical force and torque on a dielectric Rayleigh particle produced by these tightly focused vector fields are calculated and exploited for the stable trapping of dielectric Rayleigh particles. It is shown that the additional degree of freedom provided by the elliptically polarized vector field allows one to control the spatial structure of polarization, to engineer the focusing field, and to tailor the optical force and torque on a dielectric Rayleigh particle.

  20. A family of E. coli expression vectors for laboratory scale and high throughput soluble protein production

    Directory of Open Access Journals (Sweden)

    Bottomley Stephen P

    2006-03-01

    Full Text Available Abstract Background In the past few years, both automated and manual high-throughput protein expression and purification has become an accessible means to rapidly screen and produce soluble proteins for structural and functional studies. However, many of the commercial vectors encoding different solubility tags require different cloning and purification steps for each vector, considerably slowing down expression screening. We have developed a set of E. coli expression vectors with different solubility tags that allow for parallel cloning from a single PCR product and can be purified using the same protocol. Results The set of E. coli expression vectors encode for either a hexa-histidine tag or one of the three most commonly used solubility tags (GST, MBP, NusA), all with an N-terminal hexa-histidine sequence. The result is two-fold: the His-tag facilitates purification by immobilised metal affinity chromatography, whilst the fusion domains act primarily as solubility aids during expression, in addition to providing an optional purification step. We have also incorporated a TEV recognition sequence following the solubility tag domain, which allows for highly specific cleavage (using TEV protease) of the fusion protein to yield native protein. These vectors are also designed for ligation-independent cloning and they possess a high-level expressing T7 promoter, which is suitable for auto-induction. To validate our vector system, we have cloned four different genes and also one gene into all four vectors and used small-scale expression and purification techniques. We demonstrate that the vectors are capable of high levels of expression and that efficient screening of new proteins can be readily achieved at the laboratory level. Conclusion The result is a set of four rationally designed vectors, which can be used for streamlined cloning, expression and purification of target proteins in the laboratory and have the potential for being adaptable to a high

  1. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    Science.gov (United States)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a single stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by using a combination of a two-dimensional proximity tracking algorithm and the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the identified shapes into five vehicle categories: motorcycle, car, van, bus, and truck, by using three-dimensional templates and an algorithm based on the histogram of oriented gradients and the support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.
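
    The detection stage can be sketched with OpenCV's Gaussian-mixture background subtractor; synthetic frames stand in for highway video, and the tracking (KLT) and HOG+SVM classification stages are omitted:

```python
import cv2
import numpy as np

# Background subtraction stage of a traffic-monitoring pipeline (detection
# only; a full system would add KLT tracking and an HOG+SVM classifier).
subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16)

for t in range(60):                     # synthetic frames: a moving "vehicle"
    frame = np.full((240, 320, 3), 127, dtype=np.uint8)
    x = 10 + 4 * t
    cv2.rectangle(frame, (x, 100), (x + 30, 130), (255, 255, 255), -1)
    mask = subtractor.apply(frame)

# Moving blobs become contours; bounding boxes are the vehicle detections.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
print("detections in last frame:", boxes)
```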

  2. Application of kinetic flux vector splitting scheme for solving multi-dimensional hydrodynamical models of semiconductor devices

    Science.gov (United States)

    Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul

    In this article, one- and two-dimensional hydrodynamical models of semiconductor devices are numerically investigated. The models treat the propagation of electrons in a semiconductor device as the flow of a charged compressible fluid and play an important role in predicting the behavior of electron flow in semiconductor devices. Mathematically, the governing equations form a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the kinetic flux-vector splitting (KFVS) method for the hyperbolic step, and a semi-implicit Runge-Kutta method for the relaxation step. The KFVS method is based on the direct splitting of the macroscopic flux functions of the system at the cell interfaces. Second order accuracy of the scheme is achieved by using MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. Several case studies are considered. For validation, the results of the current scheme are compared with those obtained from a splitting scheme based on the NT central scheme. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage are analyzed. The accuracy, efficiency and simplicity of the proposed KFVS scheme validate its generic applicability to the given model equations. A two-dimensional simulation is also performed by the KFVS method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
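
    As a much-simplified analogue (a scalar conservation law rather than the hydrodynamical system), the sketch below shows the three ingredients named above: wind-direction flux splitting, MUSCL (minmod-limited) reconstruction, and second-order Runge-Kutta time stepping:

```python
import numpy as np

# 1-D scalar conservation law u_t + (a u)_x = 0 with periodic boundaries:
# split the flux by wind direction (F = F+ + F-), reconstruct interface
# states with a MUSCL (minmod-limited) step, advance with 2nd-order RK.
a, nx, cfl = 1.0, 200, 0.4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.3) ** 2)             # smooth initial pulse

def minmod(p, q):
    return np.where(p * q > 0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

def rhs(u):
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    u_l = u + 0.5 * du                          # left state at face i+1/2
    u_r = np.roll(u - 0.5 * du, -1)             # right state at face i+1/2
    flux = max(a, 0.0) * u_l + min(a, 0.0) * u_r         # split flux F+ + F-
    return -(flux - np.roll(flux, 1)) / dx

dt = cfl * dx / abs(a)
mass0 = u.sum() * dx
for _ in range(int(0.5 / dt)):                  # advect to t = 0.5
    u_star = u + dt * rhs(u)
    u = 0.5 * (u + u_star + dt * rhs(u_star))   # Heun's RK2 step
print("mass conserved:", np.isclose(u.sum() * dx, mass0))
```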

  3. Vector dark energy and high-z massive clusters

    Science.gov (United States)

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2011-12-01

    The detection of extremely massive clusters at z > 1 such as SPT-CL J0546-5345, SPT-CL J2106-5844 and XMMU J2235.3-2557 has been considered by some authors as a challenge to the standard Λ cold dark matter cosmology. In fact, assuming Gaussian initial conditions, the theoretical expectation of detecting such objects is as low as ≤1 per cent. In this paper we discuss the probability of the existence of such objects in the light of the vector dark energy paradigm, showing by means of a series of N-body simulations that chances of detection are substantially enhanced in this non-standard framework.

  4. A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.

    Science.gov (United States)

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2018-04-12

    This paper mainly studies and verifies the target-number and category-resolution method in multi-target cases and the target depth-resolution method for aerial targets. First, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and number resolution in multi-target cases is realized in combination with the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line-spectrum case and the multi-target multi-line-spectrum case. This paper also presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using Monte Carlo simulation, the feasibility of the proposed target-number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial target field and the surface target field under the same environmental parameters, so that an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity, so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial-target three-dimensional depth-resolution algorithm is verified.

  5. Geminivirus vectors for high-level expression of foreign proteins in plant cells.

    Science.gov (United States)

    Mor, Tsafrir S; Moon, Yong-Sun; Palmer, Kenneth E; Mason, Hugh S

    2003-02-20

    Bean yellow dwarf virus (BeYDV) is a monopartite geminivirus that can infect dicotyledonous plants. We have developed a high-level expression system that utilizes elements of the replication machinery of this single-stranded DNA virus. The replication initiator protein (Rep) mediates release and replication of a replicon from a DNA construct ("LSL vector") that contains an expression cassette for a gene of interest flanked by cis-acting elements of the virus. We used tobacco NT1 cells and biolistic delivery of plasmid DNA for evaluation of replication and expression of reporter genes contained within an LSL vector. By codelivery of a GUS reporter-LSL vector and a Rep-supplying vector, we obtained up to 40-fold increase in expression levels compared to delivery of the reporter-LSL vectors alone. High-copy replication of the LSL vector was correlated with enhanced expression of GUS. Rep expression using a whole BeYDV clone, a cauliflower mosaic virus 35S promoter driving either genomic rep or an intron-deleted rep gene, or 35S-rep contained in the LSL vector all achieved efficient replication and enhancement of GUS expression. We anticipate that this system can be adapted for use in transgenic plants or plant cell cultures with appropriately regulated expression of Rep, with the potential to greatly increase yield of recombinant proteins. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 81: 430-437, 2003.

  6. Fractional Killing-Yano Tensors and Killing Vectors Using the Caputo Derivative in Some One- and Two-Dimensional Curved Space

    Directory of Open Access Journals (Sweden)

    Ehab Malkawi

    2014-01-01

    Full Text Available The classical free Lagrangian admitting a constant of motion, in one- and two-dimensional space, is generalized using the Caputo derivative of fractional calculus. The corresponding metric is obtained and the fractional Christoffel symbols, Killing vectors, and Killing-Yano tensors are derived. Some exact solutions of these quantities are reported.
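
    For orientation, the Caputo fractional derivative referred to above is, for order 0 < α < 1, conventionally defined as:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1:
D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau
```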

  7. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  8. High energy photoproduction of the rho and rho' vector mesons

    International Nuclear Information System (INIS)

    Bronstein, J.M.

    1977-01-01

    In an experiment in the broad band photon beam at Fermilab, diffractive production of 2π and 4π states from Be, Al, Cu, and Pb targets was observed. The 2π data are dominated by the ρ(770) and the 4π data by the ρ′(1500). The energy dependence of ρ photoproduction from Be was measured, and no evidence was seen for energy variation of the forward cross section in the range 30 to 160 GeV. The forward cross section is consistent with its average value dσ/dt|₀ = 3.42 ± 0.28 μb/GeV² over the entire range. For the ρ′, a mass of 1487 ± 20 MeV and a width of 675 ± 60 MeV are obtained. All quoted errors are statistical. A standard optical model analysis of the A dependence of ρ and ρ′ photoproduction yields the following results: f²(ρ′)/f²(ρ) = 3.7 ± 0.7, σ(ρ′)/σ(ρ) = 1.05 ± 0.18. Results for the photon coupling constants are in good agreement with GVMD and with the e⁺e⁻ storage ring results. The approximate equality of the ρ-nucleon and ρ′-nucleon total cross sections is inconsistent with the diagonal version of GVMD and provides strong motivation for including transitions between different vector mesons in GVMD.

  9. Highly efficient retrograde gene transfer into motor neurons by a lentiviral vector pseudotyped with fusion glycoprotein.

    Directory of Open Access Journals (Sweden)

    Miyabi Hirano

    Full Text Available The development of gene therapy techniques to introduce transgenes that promote neuronal survival and protection provides effective therapeutic approaches for neurological and neurodegenerative diseases. Intramuscular injection of adenoviral and adeno-associated viral vectors, as well as lentiviral vectors pseudotyped with rabies virus glycoprotein (RV-G), permits gene delivery into motor neurons in animal models of motor neuron diseases. Recently, we developed a vector with highly efficient retrograde gene transfer (HiRet) by pseudotyping a human immunodeficiency virus type 1 (HIV-1)-based vector with fusion glycoprotein B type (FuG-B) or a variant of FuG-B (FuG-B2), in which the cytoplasmic domain of RV-G was replaced by the corresponding part of vesicular stomatitis virus glycoprotein (VSV-G). We have also developed another vector showing neuron-specific retrograde gene transfer (NeuRet) with fusion glycoprotein C type, in which the short C-terminal segment of the extracellular domain and the transmembrane/cytoplasmic domains of RV-G were substituted with the corresponding regions of VSV-G. These two vectors afford high efficiency of retrograde gene transfer into different neuronal populations in the brain. Here we investigated the efficiency of the HiRet (with FuG-B2) and NeuRet vectors for retrograde gene transfer into motor neurons in the spinal cord and hindbrain in mice after intramuscular injection and compared it with the efficiency of the RV-G pseudotype of the HIV-1-based vector. The main highlight of our results is that the HiRet vector shows the most efficient retrograde gene transfer into both spinal cord and hindbrain motor neurons, offering a promising gene therapeutic approach for the treatment of motor neuron diseases.

  10. High-energy manifestations of heavy quarks in axial-vector neutral currents

    International Nuclear Information System (INIS)

    Kizukuri, Y.; Ohba, I.; Okano, K.; Yamanaka, Y.

    1981-01-01

    A recent work by Collins, Wilczek, and Zee has attempted to manifest the incompleteness of the decoupling theorem in the axial-vector neutral currents at low energies. In the spirit of their work, we calculate corrections to the axial-vector neutral currents from virtual-heavy-quark exchange in high-energy e⁺e⁻ processes and estimate some observable quantities sensitive to virtual-heavy-quark masses which may be compared with experimental data at LEP energies.

  11. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  13. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields of this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available for researchers.
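
    The bit-table idea, supports and closures as boolean matrix and vector operations, can be sketched as follows; the brute-force enumeration over all column subsets is only for illustration, as the actual algorithm is far more efficient:

```python
import numpy as np
from itertools import combinations

# Bit-table view of a transaction database: rows = transactions, cols = items.
# Supports and closures reduce to fast boolean vector operations.
bits = np.array([[1, 1, 0, 1],
                 [1, 1, 1, 0],
                 [1, 1, 0, 1],
                 [0, 1, 1, 0]], dtype=bool)
min_support = 2

def closure(itemset):
    """Closed itemset containing `itemset`: all items shared by every
    transaction that contains the itemset (a bicluster of rows x columns)."""
    rows = bits[:, itemset].all(axis=1)          # supporting transactions
    return tuple(np.flatnonzero(bits[rows].all(axis=0))), int(rows.sum())

closed = set()
for size in range(1, bits.shape[1] + 1):         # exponential enumeration;
    for items in combinations(range(bits.shape[1]), size):  # real miners prune
        cols, support = closure(list(items))
        if support >= min_support:
            closed.add((cols, support))
print(sorted(closed))                            # frequent closed itemsets
```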

  14. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  15. Equivalent Vectors

    Science.gov (United States)

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. Learning about this for the first time while taking Calculus-III, the class was taught that if A×B = A×C, it does not necessarily follow that B = C. This seemed baffling. The…

  16. An Underwater Acoustic Vector Sensor with High Sensitivity and Broad Band

    Directory of Open Access Journals (Sweden)

    Hu Zhang

    2014-05-01

    Full Text Available Recently, acoustic vector sensors that use accelerometers as sensing elements have been widely used in underwater acoustic engineering, but their sensitivity in the low frequency band is usually below -220 dB. In this paper, using a piezoelectric trilaminar sensing element optimized for low frequencies, we designed a high-sensitivity, internally mounted ICP piezoelectric accelerometer as the sensing element. Through structure optimization, we built a high-sensitivity, broadband, small-scale vector sensor. The working band is 10-2000 Hz, the sound pressure sensitivity is -185 dB (at 100 Hz), the outer diameter is 42 mm, and the length is 80 mm.

  17. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  18. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  19. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  20. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wong, Chee Wei

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  1. Strategies to generate high-titer, high-potency recombinant AAV3 serotype vectors

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2016-01-01

    Full Text Available Although recombinant adeno-associated virus serotype 3 (AAV3) vectors were largely ignored previously, owing to their poor transduction efficiency in most cells and tissues examined, our initial observation of the selective tropism of AAV3 serotype vectors for human liver cancer cell lines and primary human hepatocytes has led to renewed interest in this serotype. AAV3 vectors and their variants have recently proven to be extremely efficient in targeting human and nonhuman primate hepatocytes in vitro as well as in vivo. In the present studies, we wished to evaluate the relative contributions of the cis-acting inverted terminal repeats (ITRs) from AAV3 (ITR3), as well as the trans-acting Rep proteins from AAV3 (Rep3), in AAV3 vector production and transduction. To this end, we utilized two helper plasmids: pAAVr2c3, which carries rep2 and cap3 genes, and pAAVr3c3, which carries rep3 and cap3 genes. The combined use of AAV3 ITRs, AAV3 Rep proteins, and AAV3 capsids led to the production of recombinant vectors, AAV3-Rep3/ITR3, with up to approximately two to fourfold higher titers than AAV3-Rep2/ITR2 vectors produced using AAV2 ITRs, AAV2 Rep proteins, and AAV3 capsids. We also observed that the transduction efficiency of Rep3/ITR3 AAV3 vectors was approximately fourfold higher than that of Rep2/ITR2 AAV3 vectors in human hepatocellular carcinoma cell lines in vitro. The transduction efficiency of Rep3/ITR3 vectors was increased by ∼10-fold, when AAV3 capsids containing mutations in two surface-exposed residues (serine 663 and threonine 492) were used to generate a S663V+T492V double-mutant AAV3 vector. The Rep3/ITR3 AAV3 vectors also transduced human liver tumors in vivo approximately twofold more efficiently than those generated with Rep2/ITR2. Our data suggest that the transduction efficiency of AAV3 vectors can be significantly improved both using homologous Rep proteins and ITRs as well as by capsid optimization. Thus, the combined use of

  2. High-quality and interactive animations of 3D time-varying vector fields.

    Science.gov (United States)

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
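
    A sketch of the seeding-and-advection loop described above (a toy 2-D unsteady field; the paper's density-control strategy is more refined than the simple empty-cell reseeding used here):

```python
import numpy as np

# Advect particles through an unsteady 2-D field with RK4, then re-seed
# sparse cells so the particle density (and hence the spacing of the field
# lines they seed) stays roughly uniform over time.
def velocity(p, t):                    # toy unsteady vector field
    x, y = p[:, 0], p[:, 1]
    return np.stack([-np.sin(y + t), np.sin(x - t)], axis=1)

def rk4_step(p, t, dt):
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 2 * np.pi, size=(400, 2))
dt, n_bins = 0.05, 8
for step in range(200):
    pts = np.mod(rk4_step(pts, step * dt, dt), 2 * np.pi)
    # Re-seed: one new particle in every coarse cell that became empty.
    hist, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1],
                                  bins=n_bins, range=[[0, 2 * np.pi]] * 2)
    for i, j in zip(*np.nonzero(hist == 0)):
        new = rng.uniform([xe[i], ye[j]], [xe[i + 1], ye[j + 1]], size=(1, 2))
        pts = np.vstack([pts, new])
print("particles after reseeding:", len(pts))
```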

  3. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Bechsgaard, Thor

    2016-01-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis vie...

  4. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
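
    A minimal recurrence plot computation, with the recurrence rate and a crude diagonal-structure measure as examples of recurrence-plot-based complexity measures (the toy series below stand in for the high-dimensional data discussed above):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 iff states i and j are closer
    than eps (Euclidean norm); works for scalar or vector time series."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

# Toy series: noisy sine (roughly periodic) vs. pure noise.
t = np.linspace(0, 8 * np.pi, 400)
periodic = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
noise = np.random.default_rng(1).normal(size=t.size)
for name, series in [("periodic", periodic), ("noise", noise)]:
    r = recurrence_plot(series[:, None], eps=0.3)
    rr = r.mean()                       # recurrence rate
    # Determinism proxy: fraction of recurrent points whose diagonal
    # neighbour is also recurrent (points on diagonal line structures).
    diag = r[1:, 1:] & r[:-1, :-1]
    det = diag.sum() / max(r[1:, 1:].sum(), 1)
    print(f"{name}: recurrence rate {rr:.3f}, diagonal fraction {det:.3f}")
```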

  5. Kochen-Specker vectors

    International Nuclear Information System (INIS)

    Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

    2005-01-01

    We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space, H^n, n ≥ 3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found.
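
    The 1/0 assignment condition can be checked by brute force once the orthogonality structure (the complete bases) is given; the toy bases below are purely hypothetical, and only the within-basis constraints are enforced, a simplification of the full KS condition:

```python
from itertools import product

# A KS set admits no {0,1} assignment in which no two orthogonal vectors are
# both 1 and no complete orthogonal basis is all 0; for complete bases this
# means exactly one vector per basis is assigned 1. The bases are given as
# tuples of vector ids; the example is hypothetical, not a real KS set.
def has_valid_assignment(bases, n_vectors):
    for bits in product([0, 1], repeat=n_vectors):
        if all(sum(bits[v] for v in basis) == 1 for basis in bases):
            return True                 # exactly one 1 per complete basis
    return False

toy_bases = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]   # hypothetical 3-D bases
print("0/1-colorable:", has_valid_assignment(toy_bases, n_vectors=6))
```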

  6. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....

  7. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
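
    A minimal p >> n example of the kind discussed in the lecture, using scikit-learn's Lasso to recover a sparse coefficient vector (synthetic data, illustrative settings):

```python
import numpy as np
from sklearn.linear_model import Lasso

# p >> n sparse recovery: 40 samples, 500 variables, 5 true nonzeros.
rng = np.random.default_rng(0)
n, p = 40, 500
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ beta + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(lasso.coef_)
print("recovered support:", support)    # ideally the first five indices
```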

  8. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  9. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  10. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges producing chaos are obtained. The existence of chaos is confirmed by calculation and analysis of the Lyapunov exponents of all state variables and of the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
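
    The Lyapunov-exponent computation can be illustrated with the classic two-trajectory (Benettin-type) renormalization method; the 11-dimensional wind power model is not reproduced here, so the familiar Lorenz system stands in:

```python
import numpy as np

# Largest-Lyapunov-exponent estimate by two-trajectory renormalization.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, d0, total, n_steps = 0.01, 1e-8, 0.0, 20000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([d0, 0.0, 0.0])
for _ in range(n_steps):
    a, b = rk4(lorenz, a, dt), rk4(lorenz, b, dt)
    d = np.linalg.norm(b - a)
    total += np.log(d / d0)
    b = a + (b - a) * (d0 / d)          # renormalize the separation
print("largest Lyapunov exponent ~", total / (n_steps * dt))  # ~0.9 for Lorenz
```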

  11. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Ö.; Krücker, D.; Melzer-Pellmann, I.-A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.
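
    A sketch of discovery-significance-based hyper-parameter selection; the s/√(s+b) figure of merit is a common simple choice (the paper's exact criterion may differ), and the dataset is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import ParameterGrid, train_test_split
from sklearn.svm import SVC

# Choose SVM hyper-parameters by discovery significance rather than accuracy;
# numbers and dataset are synthetic stand-ins for a new-physics search.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)                 # y=1: "signal"
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

best = (-np.inf, None)
for params in ParameterGrid({"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}):
    sel = SVC(kernel="rbf", **params).fit(X_tr, y_tr).predict(X_va) == 1
    s, b = np.sum(y_va[sel] == 1), np.sum(y_va[sel] == 0)  # selected events
    signif = s / np.sqrt(s + b) if s + b > 0 else 0.0
    best = max(best, (signif, tuple(sorted(params.items()))))
print("best significance %.2f with %s" % best)
```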

  13. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis has become a popularly used data analysis method in a number of areas.

  14. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
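
    The feature-screening step behind FAIR is compact enough to sketch: rank features by the absolute two-sample t-statistic and classify with an independence rule on the survivors. In the sketch below the number of retained features m is fixed by hand, whereas the paper chooses it from an upper bound on the classification error; the variance estimate is also simplified.

    ```python
    import numpy as np

    # Sketch of FAIR's screening step: keep the m features with the largest
    # absolute two-sample t-statistics, then assign each test point to the
    # nearer class centroid under an independence (diagonal-variance) rule.

    def two_sample_t(X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        v0, v1 = X0.var(axis=0, ddof=1), X1.var(axis=0, ddof=1)
        return (m1 - m0) / np.sqrt(v0 / len(X0) + v1 / len(X1))

    def fair_predict(X_train, y_train, X_test, m=50):
        keep = np.argsort(np.abs(two_sample_t(X_train, y_train)))[-m:]
        mu0 = X_train[y_train == 0][:, keep].mean(axis=0)
        mu1 = X_train[y_train == 1][:, keep].mean(axis=0)
        s2 = X_train[:, keep].var(axis=0, ddof=1)   # simplified variance estimate
        d0 = (((X_test[:, keep] - mu0) ** 2) / s2).sum(axis=1)
        d1 = (((X_test[:, keep] - mu1) ** 2) / s2).sum(axis=1)
        return (d1 < d0).astype(int)                # label of the nearer centroid
    ```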

  15. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Vol. 9, No. 1 (2014), pp. 131-144. ISSN 1452-4864. Grant - others: GA ČR (CZ) GA13-01930S. Institutional support: RVO:67985807. Keywords: data mining * high-dimensional data * robust econometrics * outliers * machine learning. Subject RIV: IN - Informatics, Computer Science

  16. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  17. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and

  18. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  19. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and such a trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is used to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of the individual classifiers are combined by majority voting, as sketched below. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
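
    A minimal version of this pipeline can be sketched with scikit-learn. Grouping features by k-means on their correlation profiles is this sketch's stand-in for the paper's redundancy-based partitioning, and binary 0/1 labels are assumed.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    # Partition features into groups of mutually similar columns, train one
    # SVM per group, and combine the base predictions by majority vote.

    def train_partitioned_ensemble(X, y, n_groups=5, seed=0):
        corr = np.nan_to_num(np.corrcoef(X.T))      # feature-feature similarity
        labels = KMeans(n_clusters=n_groups, random_state=seed,
                        n_init=10).fit_predict(corr)
        models = []
        for g in range(n_groups):
            idx = np.where(labels == g)[0]
            models.append((idx, SVC().fit(X[:, idx], y)))
        return models

    def predict_majority(models, X):
        votes = np.stack([m.predict(X[:, idx]) for idx, m in models])
        return (votes.mean(axis=0) > 0.5).astype(int)   # majority vote, 0/1 labels
    ```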

  20. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  1. High-Throughput Agrobacterium-mediated Transformation of Medicago Truncatula in Comparison to Two Expression Vectors

    International Nuclear Information System (INIS)

    Sultana, T.; Deeba, F.; Naqvi, S. M. S.

    2016-01-01

    Legumes have long been recalcitrant to efficient Agrobacterium-mediated transformation. The selection of Medicago truncatula as a model legume plant for molecular analysis resulted in the development of efficient Agrobacterium-mediated transformation protocols. In the current study, M. truncatula transformed plants expressing OsRGLP1 were obtained through GATEWAY technology using pGOsRGLP1 (pH7WG2.0=OsRGLP1). The transformation efficiency of this vector was compared with an expression vector from the pCAMBIA series over-expressing the same gene (pCOsRGLP1). A lower percentage of explants generated hygromycin-resistant plantlets with the pGOsRGLP1 vector (18.3 percent) than with the pCOsRGLP1 vector (35.5 percent). The transformation efficiency in terms of PCR-positive plants was 9.4 percent for pGOsRGLP1 and 21.6 percent for pCOsRGLP1. Furthermore, 24.4 percent of explants generated antibiotic-resistant plantlets on 20 mg l⁻¹ of hygromycin, which was higher than the 12.2 percent obtained on 15 mg l⁻¹. T₁ progeny analysis indicated that the transgene was inherited in a Mendelian manner. The functionally active status of the transgene was confirmed by a high level of superoxide dismutase (SOD) activity in the transformed progeny. (author)

  2. A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    ZHAO Jiaojiao

    2015-05-01

    Full Text Available A fast and high-precision orientation algorithm for BeiDou is proposed by deeply analyzing the constellation characteristics of BeiDou and the features of GEO satellites. Taking advantage of the good east-west geometry, the baseline vector candidate values are first solved from the GEO satellite observations combined with dimensionality reduction theory. Then, the ambiguity function is used to judge the candidates in order to obtain the optimal baseline vector and the wide-lane integer ambiguities. On this basis, the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. This new algorithm not only improves the ill-conditioning of the traditional algorithm, but also reduces the ambiguity search region to a great extent, thus allowing the integer ambiguities to be calculated in a single epoch. The algorithm is simulated with the actual BeiDou ephemeris, and the results show that the method is efficient and fast for orientation. It is capable of a very high single-epoch success rate (99.31%) and accurate attitude angles (the standard deviations of pitch and heading are 0.07° and 0.13°, respectively) in a real-time, dynamic environment.

  3. Fetal muscle gene transfer is not enhanced by an RGD capsid modification to high-capacity adenoviral vectors.

    Science.gov (United States)

    Bilbao, R; Reay, D P; Hughes, T; Biermann, V; Volpers, C; Goldberg, L; Bergelson, J; Kochanek, S; Clemens, P R

    2003-10-01

    High levels of alpha(v) integrin expression by fetal muscle suggested that vector re-targeting to integrins could enhance adenoviral vector-mediated transduction, thereby increasing safety and efficacy of muscle gene transfer in utero. High-capacity adenoviral (HC-Ad) vectors modified by an Arg-Gly-Asp (RGD) peptide motif in the HI loop of the adenoviral fiber (RGD-HC-Ad) have demonstrated efficient gene transfer through binding to alpha(v) integrins. To test integrin targeting of HC-Ad vectors for fetal muscle gene transfer, we compared unmodified and RGD-modified HC-Ad vectors. In vivo, unmodified HC-Ad vector transduced fetal mouse muscle with four-fold higher efficiency compared to RGD-HC-Ad vector. Confirming that the difference was due to muscle cell autonomous factors and not mechanical barriers, transduction of primary myogenic cells isolated from murine fetal muscle in vitro demonstrated a three-fold better transduction by HC-Ad vector than by RGD-HC-Ad vector. We hypothesized that the high expression level of coxsackievirus and adenovirus receptor (CAR), demonstrated in fetal muscle cells both in vitro and in vivo, was the crucial variable influencing the relative transduction efficiencies of HC-Ad and RGD-HC-Ad vectors. To explore this further, we studied transduction by HC-Ad and RGD-HC-Ad vectors in paired cell lines that expressed alpha(v) integrins and differed only by the presence or absence of CAR expression. The results increase our understanding of factors that will be important for retargeting HC-Ad vectors to enhance gene transfer to fetal muscle.

  4. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits) recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the same criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements, with d a prime integer. Applying this to two qubits (d=2), we find that the derived inequality reduces to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying it to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement, and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  5. Dynamics of vector dark soliton induced by the Rabi coupling in one-dimensional trapped Bose–Einstein condensates

    International Nuclear Information System (INIS)

    Liu, Chao-Fei; Lu, Min; Liu, Wei-Qing

    2012-01-01

    The Rabi coupling between two components of Bose–Einstein condensates is used to controllably change an ordinary dark soliton into a dynamic vector dark soliton or an ordinary vector dark soliton. When all inter- and intraspecies interactions are equal, the dynamic vector dark soliton is exactly constructed by two sub-dark-solitons, which oscillate with the same velocity and periodically convert into each other. When the interspecies interactions deviate from the intraspecies ones, the whole soliton can maintain its essential shape, but the sub-dark-soliton becomes inexact or is broken. This study indicates that the Rabi coupling can be used to obtain various vector dark solitons. -- Highlights: ► We consider the Rabi coupling to affect the dark soliton in BECs. ► We examine the changes of the initial dark solitons. ► The structure of the soliton depends on the inter- and intraspecies interaction strengths. ► The Rabi coupling can be used to obtain various vector dark solitons.

  6. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  7. Vorticity vector-potential method based on time-dependent curvilinear coordinates for two-dimensional rotating flows in closed configurations

    Science.gov (United States)

    Fu, Yuan; Zhang, Da-peng; Xie, Xi-lin

    2018-04-01

    In this study, a vorticity vector-potential method for two-dimensional viscous incompressible rotating driven flows is developed in the time-dependent curvilinear coordinates. The method is applicable in both inertial and non-inertial frames of reference with the advantage of a fixed and regular calculation domain. The numerical method is applied to triangle and curved triangle configurations in constant and varying rotational angular velocity cases respectively. The evolutions of flow field are studied. The geostrophic effect, unsteady effect and curvature effect on the evolutions are discussed.

  8. Generation of High-order Group-velocity-locked Vector Solitons

    OpenAIRE

    Jin, X. X.; Wu, Z. C.; Zhang, Q.; Li, L.; Tang, D. Y.; Shen, D. Y.; Fu, S. N.; Liu, D. M.; Zhao, L. M.

    2015-01-01

    We report numerical simulations on high-order group-velocity-locked vector soliton (GVLVS) generation based on the fundamental GVLVS. The generated high-order GVLVS is characterized by a two-humped pulse along one polarization and a single-humped pulse along the orthogonal polarization. The phase difference between the two humps can be 180 degrees. It is found that by appropriately setting the time separation between the two components of the fundamental GVLVS, the high-order GVLVS with ...

  9. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
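
    The recursive idea — apply a low-dimensional quadrature rule iteratively — reduces chain-structured integrals to plain linear algebra. Below is a minimal Python sketch for a periodic nearest-neighbour chain, where the N-dimensional integral collapses to the trace of the N-th power of a small quadrature matrix; the kernel, bounds, and node count are illustrative assumptions, not the paper's lattice setup.

    ```python
    import numpy as np

    # Recursive numerical integration for I = ∫ dx_1..dx_N  prod_i K(x_i, x_{i+1})
    # with periodic coupling: each 1-D integral uses an n-point Gauss-Legendre
    # rule, so I ≈ trace(T^N) with T_ij = sqrt(w_i) K(x_i, x_j) sqrt(w_j).

    def rni_chain(kernel, N, n=32, a=-np.pi, b=np.pi):
        x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        x = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
        w = 0.5 * (b - a) * w
        T = np.sqrt(w)[:, None] * kernel(x[:, None], x[None, :]) * np.sqrt(w)[None, :]
        return np.trace(np.linalg.matrix_power(T, N))

    # Example: Boltzmann-type weight exp(beta * cos(x - y)) on a ring of N sites.
    beta = 1.0
    print(rni_chain(lambda x, y: np.exp(beta * np.cos(x - y)), N=10))
    ```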

  10. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  11. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  12. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  13. Investigating the Magnetic Imprints of Major Solar Eruptions with SDO /HMI High-cadence Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Xudong; Hoeksema, J. Todd; Liu Yang; Chen Ruizhu [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Kazachenko, Maria, E-mail: xudong@Sun.stanford.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

    2017-04-10

    The solar active region photospheric magnetic field evolves rapidly during major eruptive events, suggesting appreciable feedback from the corona. Previous studies of these “magnetic imprints” are mostly based on line-of-sight-only or lower-cadence vector observations; a temporally resolved depiction of the vector field evolution is hitherto lacking. Here, we introduce the high-cadence (90 s or 135 s) vector magnetogram data set from the Helioseismic and Magnetic Imager, which is well suited for investigating the phenomenon. These observations allow quantitative characterization of the permanent, step-like changes that are most pronounced in the horizontal field component (B_h). A highly structured pattern emerges from analysis of an archetypical event, SOL2011-02-15T01:56, where B_h near the main polarity inversion line increases significantly during the earlier phase of the associated flare with a timescale of several minutes, while B_h in the periphery decreases at later times with smaller magnitudes and a slightly longer timescale. The data set also allows effective identification of the “magnetic transient” artifact, where enhanced flare emission alters the Stokes profiles and the inferred magnetic field becomes unreliable. Our results provide insights on the momentum processes in solar eruptions. The data set may also be useful to the study of sunquakes and data-driven modeling of the corona.

  14. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  15. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  16. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
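
    The first-order (cut-)HDMR idea is compact enough to sketch: the response is approximated by a reference value plus one-dimensional component functions sampled along lines through a cut point, so the number of model evaluations grows linearly in the number of variables. The test function and grids below are illustrative assumptions; fuzzy variables would add an outer α-cut loop around this construction.

    ```python
    import numpy as np

    # First-order cut-HDMR surrogate: f(x) ≈ f0 + Σ_i f_i(x_i), where each
    # f_i is tabulated along a line through the reference (cut) point and
    # higher-order correlation terms are neglected.

    def build_cut_hdmr(f, x_ref, grids):
        f0 = f(x_ref)
        components = []
        for i, grid in enumerate(grids):            # one 1-D table per variable
            vals = []
            for v in grid:
                x = x_ref.copy()
                x[i] = v
                vals.append(f(x) - f0)
            components.append((grid, np.array(vals)))

        def surrogate(x):
            out = f0
            for i, (grid, vals) in enumerate(components):
                out += np.interp(x[i], grid, vals)  # 1-D interpolation per term
            return out
        return surrogate

    # Toy usage: a weakly coupled function of 5 variables.
    f = lambda x: np.sum(np.sin(x)) + 0.01 * x[0] * x[1]
    sur = build_cut_hdmr(f, np.zeros(5), [np.linspace(-1, 1, 21)] * 5)
    print(sur(np.full(5, 0.3)), f(np.full(5, 0.3)))  # surrogate vs. exact
    ```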

  17. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with a high risk of disruption and those with a low risk of disruption. (paper)
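
    As a rough illustration of the mapping step, the sketch below trains a tiny self-organizing map from scratch; the grid size, learning-rate and neighborhood schedules, and data scaling are illustrative choices, not the JET study's settings.

    ```python
    import numpy as np

    # Minimal self-organizing map: for each sample, find the best-matching
    # unit and pull its neighborhood toward the sample, with shrinking
    # learning rate and neighborhood radius.

    def train_som(X, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        gx, gy = grid
        W = rng.normal(size=(gx, gy, X.shape[1]))       # codebook vectors
        coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                      indexing="ij"), axis=-1)
        T, t = epochs * len(X), 0
        for _ in range(epochs):
            for x in rng.permutation(X):
                lr = lr0 * np.exp(-t / T)
                sigma = sigma0 * np.exp(-t / T)
                d = np.linalg.norm(W - x, axis=2)
                bmu = np.unravel_index(np.argmin(d), d.shape)   # winning node
                h = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
                W += lr * h[..., None] * (x - W)                # neighborhood update
                t += 1
        return W  # each discharge can then be assigned to its best-matching node
    ```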

  18. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-15

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.

  19. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.

  20. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Full Text Available Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for the identification of different gear crack levels under different working conditions is proposed. First, super-high-dimensional statistical features are extracted from the continuous wavelet transform at different scales; the proposed method extracts 920 statistical features, making the feature set super-high dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method, principal component analysis, is used. To further improve the identification accuracy for different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method, and comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracy among all existing methods.
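
    The reduction-plus-classification pipeline described above maps naturally onto a short scikit-learn sketch. The retained-variance ratio and SVM settings here are this sketch's assumptions, with X the assumed (n_samples, 920) wavelet-feature matrix and y the crack-level labels.

    ```python
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # PCA compresses the 920 wavelet-domain statistical features, then an
    # RBF SVM classifies the crack levels on the low-dimensional scores.

    model = make_pipeline(
        StandardScaler(),          # put features on comparable scales first
        PCA(n_components=0.95),    # keep components explaining 95% of variance
        SVC(kernel="rbf", C=10),
    )
    # model.fit(X_train, y_train); accuracy = model.score(X_test, y_test)
    ```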

  2. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Computationally expensive engineering simulations can often hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
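
    A linear ROSM of the kind described can be sketched in a few lines: compress the high-dimensional outputs with PCA and interpolate the retained coefficients over the design parameters with radial basis functions (SciPy ≥ 1.7 for RBFInterpolator). The arrays params and fields are assumed given; the non-linear variant would swap in sklearn's KernelPCA.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from sklearn.decomposition import PCA

    # Linear ROSM sketch: PCA on simulation outputs + RBF interpolation of
    # the PCA coefficients over the design-parameter space.
    # params: (n_runs, n_params); fields: (n_runs, n_dof) simulation outputs.

    def fit_rosm(params, fields, n_modes=10):
        pca = PCA(n_components=n_modes).fit(fields)
        coeffs = pca.transform(fields)              # low-dimensional representation
        rbf = RBFInterpolator(params, coeffs)       # coefficient surrogate
        return lambda p: pca.inverse_transform(rbf(np.atleast_2d(p)))

    # usage: predictor = fit_rosm(params, fields); field_hat = predictor(p_new)
    ```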

  3. The zero-dimensional O(N) vector model as a benchmark for perturbation theory, the large-N expansion and the functional renormalization group

    International Nuclear Information System (INIS)

    Keitel, Jan; Bartosch, Lorenz

    2012-01-01

    We consider the zero-dimensional O(N) vector model as a simple example to calculate n-point correlation functions using perturbation theory, the large-N expansion and the functional renormalization group (FRG). Comparing our findings with exact results, we show that perturbation theory breaks down for moderate interactions for all N, as one should expect. While the interaction-induced shift of the free energy and the self-energy are well described by the large-N expansion even for small N, this is not the case for higher order correlation functions. However, using the FRG in its one-particle irreducible formalism, we see that very few running couplings suffice to get accurate results for arbitrary N in the strong coupling regime, outperforming the large-N expansion for small N. We further remark on how the derivative expansion, a well-known approximation strategy for the FRG, reduces to an exact method for the zero-dimensional O(N) vector model. (paper)

  4. A model for soft high-energy scattering: Tensor pomeron and vector odderon

    Energy Technology Data Exchange (ETDEWEB)

    Ewerz, Carlo, E-mail: C.Ewerz@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt (Germany); Maniatis, Markos, E-mail: mmaniatis@ubiobio.cl [Departamento de Ciencias Básicas, Universidad del Bío-Bío, Avda. Andrés Bello s/n, Casilla 447, Chillán 3780000 (Chile); Nachtmann, Otto, E-mail: O.Nachtmann@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany)

    2014-03-15

    A model for soft high-energy scattering is developed. The model is formulated in terms of effective propagators and vertices for the exchange objects: the pomeron, the odderon, and the reggeons. The vertices are required to respect standard rules of QFT. The propagators are constructed taking into account the crossing properties of amplitudes in QFT and the power-law ansätze from the Regge model. We propose to describe the pomeron as an effective spin 2 exchange. This tensor pomeron gives, at high energies, the same results for the pp and pp̄ elastic amplitudes as the standard Donnachie–Landshoff pomeron. But with our tensor pomeron it is much more natural to write down effective vertices of all kinds which respect the rules of QFT. This is particularly clear for the coupling of the pomeron to particles carrying spin, for instance vector mesons. We describe the odderon as an effective vector exchange. We emphasise that with a tensor pomeron and a vector odderon the corresponding charge-conjugation relations are automatically fulfilled. We compare the model to some experimental data, in particular to data for the total cross sections, in order to determine the model parameters. The model should provide a starting point for a general framework for describing soft high-energy reactions. It should give to experimentalists an easily manageable tool for calculating amplitudes for such reactions and for obtaining predictions which can be compared in detail with data. -- Highlights: •A general model for soft high-energy hadron scattering is developed. •The pomeron is described as effective tensor exchange. •Explicit expressions for effective reggeon–particle vertices are given. •Reggeon–particle and particle–particle vertices are related. •All vertices respect the standard C parity and crossing rules of QFT.

  5. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    Full Text Available In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool for a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable; a sketch of this protocol follows. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
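
    The comparison protocol can be sketched as follows: each player's model is scored with a strictly proper scoring rule (the Brier score here) under bootstrap cross-validation, training on bootstrap samples and scoring on the out-of-bag observations. Models are assumed to be scikit-learn classifiers exposing predict_proba, and the replication count is an illustrative choice.

    ```python
    import numpy as np
    from sklearn.base import clone

    # Bootstrap cross-validation with a strictly proper scoring rule: train
    # on a bootstrap sample, score on the out-of-bag rows, average over B
    # replications. Lower mean Brier score wins the comparison.

    def brier(y_true, p_hat):
        return np.mean((p_hat - y_true) ** 2)

    def bootstrap_cv_score(model, X, y, B=100, seed=0):
        rng = np.random.default_rng(seed)
        n, scores = len(y), []
        for _ in range(B):
            idx = rng.integers(0, n, n)                 # bootstrap training rows
            oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag test rows
            if len(oob) == 0:
                continue
            m = clone(model).fit(X[idx], y[idx])
            scores.append(brier(y[oob], m.predict_proba(X[oob])[:, 1]))
        return np.mean(scores)
    ```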

  6. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset in order to compare their efficiency in terms of classification accuracy: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed statistical significance between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further improvement can be achieved by testing a few additional methodological refinements of the machine learning methods.

  7. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is technically challenging, and the task becomes more difficult if the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier, as sketched below. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
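
    A stripped-down version of asBagging with feature subspaces is sketched below: every base SVM sees all minority-class samples, an equally sized bootstrap of the majority class, and a random subset of features. Uniform feature sampling replaces the paper's FSS balance between accuracy and diversity, and the minority class is assumed coded as 1.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Asymmetric bagging with random feature subsets: balanced bootstrap
    # samples for each base SVM, predictions combined by majority vote.

    def fit_asbagging(X, y, n_estimators=25, feat_frac=0.5, seed=0):
        rng = np.random.default_rng(seed)
        minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
        n_feat = max(1, int(feat_frac * X.shape[1]))
        ensemble = []
        for _ in range(n_estimators):
            maj = rng.choice(majority, size=len(minority), replace=True)
            rows = np.concatenate([minority, maj])      # balanced training set
            cols = rng.choice(X.shape[1], size=n_feat, replace=False)
            ensemble.append((cols, SVC().fit(X[np.ix_(rows, cols)], y[rows])))
        return ensemble

    def predict_asbagging(ensemble, X):
        votes = np.stack([clf.predict(X[:, cols]) for cols, clf in ensemble])
        return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote
    ```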

  9. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Mehmet Oezguer; Kruecker, Dirk; Melzer-Pellmann, Isabell [DESY, Hamburg (Germany)

    2016-07-01

    In this talk, the use of Support Vector Machines (SVM) is promoted for new-physics searches in high-energy physics. We developed an interface, called SVM HEP Interface (SVM-HINT), for a popular SVM library, LibSVM, and introduced a statistical-significance-based hyper-parameter optimization algorithm for new-physics searches. As an example case study, a search for Supersymmetry at the Large Hadron Collider is presented to demonstrate the capabilities of SVM using SVM-HINT.

  10. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the back-reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitary principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)

  11. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa, and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa.

  12. A novel and highly efficient production system for recombinant adeno-associated virus vector.

    Science.gov (United States)

    Wu, Zhijian; Wu, Xiaobing; Cao, Hui; Dong, Xiaoyan; Wang, Hong; Hou, Yunde

    2002-02-01

    Recombinant adeno-associated virus (rAAV) has proven to be a promising gene delivery vector for human gene therapy. However, its application has been limited by the difficulty of obtaining sufficient quantities of high-titer vector stocks. In this paper, a novel and highly efficient production system for rAAV is described. A recombinant herpes simplex virus type 1 (rHSV-1) designated HSV1-rc/DeltaUL2, which expresses adeno-associated virus type 2 (AAV-2) Rep and Cap proteins, was constructed previously. The data confirmed that it supports rAAV replication and packaging and that the generated rAAV is infectious. Meanwhile, an rAAV proviral cell line designated BHK/SG2, which carries the green fluorescent protein (GFP) gene expression cassette, was established by transfecting BHK-21 cells with the rAAV vector plasmid pSNAV-2-GFP. Infecting BHK/SG2 with HSV1-rc/DeltaUL2 at an MOI of 0.1 resulted in optimal yields of rAAV, reaching 250 transducing units (TU) or 4.28×10⁴ particles per cell. Therefore, compared with the conventional transfection method, the yield of rAAV using this "one proviral cell line, one helper virus" strategy was increased by two orders of magnitude. Large-scale production of rAAV can easily be achieved using this strategy and might meet the demands of clinical trials of rAAV-mediated gene therapy.

  13. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task

  14. On High Dimensional Searching Spaces and Learning Methods

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2017-01-01

    , and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some...

  15. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing ...

  16. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  17. A vector/parallel method for a three-dimensional transport model coupled with bio-chemical terms

    NARCIS (Netherlands)

    B.P. Sommeijer (Ben); J. Kok (Jan)

    1995-01-01

    A so-called fractional step method is considered for the time integration of a three-dimensional transport-chemical model in shallow seas. In this method, the transport part and the chemical part are treated separately by appropriate integration techniques (a one-dimensional sketch of the idea is given below). This separation is motivated
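
    A one-dimensional caricature of the splitting, with periodic boundaries, an upwind advection step, central diffusion, and an exactly integrated linear decay standing in for the bio-chemical terms; all coefficients are illustrative, not the paper's 3-D shallow-sea setup.

    ```python
    import numpy as np

    # Fractional-step (operator-splitting) idea: per time step, advance the
    # transport part and the chemical part as separate substeps, each with a
    # scheme suited to it.

    def step(c, dt, dx, u=0.5, D=0.01, k=0.1):
        adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central diffusion
        c = c + dt * (adv + dif)        # transport substep (explicit)
        return c * np.exp(-k * dt)      # chemistry substep (exact for linear decay)

    # usage: evolve an initial blob on a periodic grid
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    c = np.exp(-200 * (x - 0.3) ** 2)
    for _ in range(500):
        c = step(c, dt=1e-3, dx=x[1] - x[0])
    ```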

  18. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  19. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2014-01-01

    Full Text Available This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct feature vectors, according to the existing literature as well as clinical experience. By adopting self-learning on sleep samples, the linear combination weights and parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned and the multi-kernel FSVM (MK-FSVM) was constructed; a sketch of the multi-kernel construction is given below. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were similar, which well reflected the sleep structure. The staging algorithm proposed in this paper is transparent and worth further investigation.
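
    The multi-kernel construction can be approximated with scikit-learn's precomputed-kernel interface: build a fixed nonnegative combination of base Gram matrices and train an SVC on it. The kernel choices and weights below are illustrative assumptions, and the fuzzy per-sample memberships of FSVM are only crudely mimicked via sample_weight at fit time.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
    from sklearn.svm import SVC

    # Rough analogue of a multi-kernel SVM: a weighted sum of base Gram
    # matrices fed to SVC(kernel="precomputed").

    def combined_gram(X, Z, w=(0.5, 0.3, 0.2)):
        return (w[0] * rbf_kernel(X, Z, gamma=0.1)
                + w[1] * rbf_kernel(X, Z, gamma=1.0)
                + w[2] * polynomial_kernel(X, Z, degree=2))

    def fit_mk_svm(X_train, y_train, memberships=None):
        clf = SVC(kernel="precomputed")
        clf.fit(combined_gram(X_train, X_train), y_train,
                sample_weight=memberships)   # fuzzy memberships, if provided
        return clf

    # prediction uses the cross Gram matrix between test and training points:
    # y_hat = clf.predict(combined_gram(X_test, X_train))
    ```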

  20. A Transverse Oscillation Approach for Estimation of Three-Dimensional Velocity Vectors, Part I: Concept and Simulation Study

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Jensen, Jørgen Arendt

    2014-01-01

    A method for 3-D velocity vector estimation using transverse oscillations is presented. The method employs a 2-D transducer and decouples the velocity estimation into three orthogonal components, which are estimated simultaneously and from the same data. The validity of the method is investigated by conducting simulations emulating a 32 × 32 matrix transducer. The results are evaluated using two performance metrics related to precision and accuracy. The study includes several parameters including 49 flow directions, the SNR, steering angle, and apodization types. The 49 flow directions cover the positive octant of the unit sphere. In terms of accuracy, the median bias is −2%. The precision of v_x and v_y depends on the flow angle β and ranges from 5% to 31% relative to the peak velocity magnitude of 1 m/s. For comparison, the range is 0.4% to 2% for v_z. The parameter study…

  1. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  2. A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle

    Science.gov (United States)

    Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.

    1992-01-01

    A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: a maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three-vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.

  3. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
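
    Down-sizing, one of the strategies found to work well above, simply discards majority-class training samples until the classes are balanced. A minimal sketch with synthetic high-dimensional data (hypothetical sizes; logistic regression stands in for the classifier types compared in the paper):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      # Imbalanced toy data: 30 minority vs. 300 majority samples, 1000 variables.
      X_min = rng.normal(0.5, 1.0, size=(30, 1000))
      X_maj = rng.normal(0.0, 1.0, size=(300, 1000))

      # Down-sizing: keep a random majority subset equal in size to the minority class.
      keep = rng.choice(len(X_maj), size=len(X_min), replace=False)
      X = np.vstack([X_min, X_maj[keep]])
      y = np.array([1] * len(X_min) + [0] * len(X_min))

      clf = LogisticRegression(max_iter=1000).fit(X, y)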

  4. Vector soup: high-throughput identification of Neotropical phlebotomine sand flies using metabarcoding.

    Science.gov (United States)

    Kocher, Arthur; Gantier, Jean-Charles; Gaborit, Pascal; Zinger, Lucie; Holota, Helene; Valiere, Sophie; Dusfour, Isabelle; Girod, Romain; Bañuls, Anne-Laure; Murienne, Jerome

    2017-03-01

    Phlebotomine sand flies are haematophagous dipterans of primary medical importance. They represent the only proven vectors of leishmaniasis worldwide and are involved in the transmission of various other pathogens. Studying the ecology of sand flies is crucial to understand the epidemiology of leishmaniasis and further control this disease. A major limitation in this regard is that traditional morphology-based methods for sand fly species identification are time-consuming and require taxonomic expertise. DNA metabarcoding holds great promise in overcoming this issue by allowing the identification of multiple species from a single bulk sample. Here, we assessed the reliability of a short insect metabarcode located in the mitochondrial 16S rRNA for the identification of Neotropical sand flies, and constructed a reference database for 40 species found in French Guiana. Then, we conducted a metabarcoding experiment on sand fly mixtures of known content and showed that the method allows an accurate identification of specimens in pools. Finally, we applied metabarcoding to field samples caught in a 1-ha forest plot in French Guiana. Besides providing reliable molecular data for species-level assignations of phlebotomine sand flies, our study proves the efficiency of metabarcoding based on the mitochondrial 16S rRNA for studying sand fly diversity from bulk samples. The application of this high-throughput identification procedure to field samples can provide great opportunities for vector monitoring and eco-epidemiological studies.

  5. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling.

    Science.gov (United States)

    Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2017-10-26

    In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to arbitrary array configurations and has low complexity because it avoids the angle searching procedure. When MC and GPU exist together between the elements of the EMVSs, in order to keep our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm, which solves the problem of GPU and MC efficiently; the array configuration can again be arbitrary. The effectiveness of the proposed algorithms is verified by simulation results.

  6. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

    International Nuclear Information System (INIS)

    Majumdar, A.; Makowitz, H.

    1987-10-01

    With the development of modern vector/parallel supercomputers and their lower performance clones, it has become possible to increase computational performance by several orders of magnitude compared with the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms with these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation that is optimized for vector/parallel architectures, rather than the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit methods (i.e., marching) where stability will permit. We call this approach the "EPIC" methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
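
    For reference, the explicit (marching) update that such vector-friendly schemes build on is a single stencil sweep over the grid. A minimal sketch for the one-dimensional heat equation u_t = α·u_xx using the forward-time centered-space (FTCS) scheme (not the report's Hopscotch variants themselves), where the slice arithmetic maps directly onto vector hardware:

      import numpy as np

      alpha, dx, dt = 1.0e-4, 0.01, 0.1
      r = alpha * dt / dx**2          # explicit stability requires r <= 0.5
      assert r <= 0.5

      u = np.zeros(101)
      u[40:60] = 1.0                  # initial temperature pulse

      for _ in range(1000):
          # One explicit FTCS step; the whole grid updates in one vector operation.
          u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])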

  7. ESPRIT-Like Two-Dimensional DOA Estimation for Monostatic MIMO Radar with Electromagnetic Vector Received Sensors under the Condition of Gain and Phase Uncertainties and Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-10-01

    Full Text Available In this paper, we focus on the problem of two-dimensional direction of arrival (2D-DOA) estimation for monostatic MIMO radar with electromagnetic vector received sensors (MIMO-EMVSs) under the condition of gain and phase uncertainties (GPU) and mutual coupling (MC). GPU spoil the invariance property of the EMVSs in MIMO-EMVSs, so the effective ESPRIT algorithm cannot be used directly. We therefore put forward a C-SPD ESPRIT-like algorithm. It estimates the 2D-DOA and polarization station angle (PSA) based on the instrumental sensors method (ISM). The C-SPD ESPRIT-like algorithm can obtain good angle estimation accuracy without knowing the GPU. Furthermore, it can be applied to arbitrary array configurations and has low complexity because it avoids the angle searching procedure. When MC and GPU exist together between the elements of the EMVSs, in order to keep our algorithm feasible, we derive a class of separated electromagnetic vector receivers and give the S-SPD ESPRIT-like algorithm, which solves the problem of GPU and MC efficiently; the array configuration can again be arbitrary. The effectiveness of the proposed algorithms is verified by simulation results.

  8. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)

  9. Vector form Intrinsic Finite Element Method for the Two-Dimensional Analysis of Marine Risers with Large Deformations

    Science.gov (United States)

    Li, Xiaomin; Guo, Xueli; Guo, Haiyan

    2018-06-01

    Robust numerical models that describe the complex behaviors of risers are needed because these constitute dynamically sensitive systems. This paper presents a simple and efficient algorithm for the nonlinear static and dynamic analyses of marine risers. The proposed approach uses the vector form intrinsic finite element (VFIFE) method, which is based on vector mechanics theory and numerical calculation. In this method, the risers are described by a set of particles directly governed by Newton's second law and connected by weightless elements that can only resist internal forces. The method does not require the integration of the stiffness matrix, nor does it need iterations to solve the governing equations. Due to these advantages, the method can easily add or remove elements and change the boundary conditions, representing an innovative approach to solving nonlinear behaviors such as large deformation and large displacement. To prove the feasibility of the VFIFE method in the analysis of risers, rigid and flexible risers, belonging to two different categories of marine risers that usually differ in modeling and solution methods, are employed in the present study. In the analysis, the plane beam element is adopted in the simulation of interaction forces between the particles, and the axial force, shear force, and bending moment are also considered. The results are compared with those of the conventional finite element method (FEM) and those reported in the related literature. The findings revealed that both the rigid and flexible risers could be modeled in a similar unified analysis model and that the VFIFE method is feasible for solving problems related to the complex behaviors of marine risers.

  10. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial…

  11. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  12. Brane vector phenomenology

    International Nuclear Information System (INIS)

    Clark, T.E.; Love, S.T.; Nitta, Muneto; Veldhuis, T. ter; Xiong, C.

    2009-01-01

    Local oscillations of the brane world are manifested as massive vector fields. Their coupling to the Standard Model can be obtained using the method of nonlinear realizations of the spontaneously broken higher-dimensional space-time symmetries, and to an extent, are model independent. Phenomenological limits on these vector field parameters are obtained using LEP collider data and dark matter constraints

  13. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable … is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high-dimensional reliability problems in structural dynamics.

  14. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolutions of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than that in a sub-Ohmic or Ohmic thermal reservoir environment.

  15. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering…

  16. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  17. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would

  18. Urban air quality forecasting based on multi-dimensional collaborative Support Vector Regression (SVR): A case study of Beijing-Tianjin-Shijiazhuang.

    Science.gov (United States)

    Liu, Bing-Chun; Binaykia, Arihant; Chang, Pei-Chann; Tiwari, Manoj Kumar; Tsao, Cheng-Chin

    2017-01-01

    China is facing a serious air pollution problem, with substantial impacts on human health and the environment; its urban areas are the most affected owing to rapid industrial and economic growth. It is therefore important to develop better and more reliable forecasting models to accurately predict air quality. This paper selected Beijing, Tianjin and Shijiazhuang, three cities in the Jingjinji region, as a case study and developed a new collaborative forecasting model using Support Vector Regression (SVR) for urban Air Quality Index (AQI) prediction in China. The study aims to improve forecasting results by minimizing the prediction error of present machine learning algorithms, taking multi-city, multi-dimensional air quality information and weather conditions as input. The results show a decrease in MAPE for the multi-city, multi-dimensional regression when there is strong interaction and correlation between the air quality characteristic attributes and AQI. Geographical location is also found to play a significant role in AQI prediction for Beijing, Tianjin and Shijiazhuang.
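
    In sketch form, the collaborative setup amounts to regressing one city's AQI on pollutant and weather attributes pooled from all three cities. A hedged Python illustration with synthetic stand-in data (the paper's actual feature set, kernel settings, and tuning are not reproduced here):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      # Five hypothetical attributes (e.g., PM2.5, SO2, NO2, temperature, wind)
      # for each of the three cities: 15 input features per day.
      X = rng.normal(size=(365, 15))
      y_aqi = (100 + 20 * X[:, :5].sum(axis=1)      # own-city attributes
               + 6 * X[:, 5:10].sum(axis=1)         # neighboring-city influence
               + rng.normal(scale=2.0, size=365))

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X[:300], y_aqi[:300])
      pred = model.predict(X[300:])
      mape = np.mean(np.abs((y_aqi[300:] - pred) / y_aqi[300:])) * 100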

  19. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors: the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system. (author)

  20. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    Full Text Available The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, subsequently encoded by Fisher vectors (FV); (ii) for obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) in order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) for reducing the storage and CPU costs of high dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the STL-10 dataset. This simple and efficient framework shows very promising performance compared to conventional methods.
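
    Contribution (iv) above, keeping only the FV dimensions most informative about the class labels, can be sketched with scikit-learn's mutual information estimator (hypothetical dimensions; the paper's importance-sorting algorithm is only approximated by this ranking):

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      rng = np.random.default_rng(3)
      fisher_vectors = rng.normal(size=(500, 1024))   # FV encodings of 500 images
      labels = rng.integers(0, 10, size=500)          # 10 semantic categories

      # Rank FV dimensions by estimated mutual information with the labels,
      # then keep the most informative ones to cut storage and CPU cost.
      mi = mutual_info_classif(fisher_vectors, labels, random_state=0)
      top = np.argsort(mi)[::-1][:128]
      reduced = fisher_vectors[:, top]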

  1. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,…,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z∈Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)).
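
    The role of the anchor point z with nonvanishing f(z) can be made concrete: for a rank-one function, the d univariate traces obtained by varying one coordinate of z at a time multiply together to f(x)·f(z)^(d-1), so axis-parallel point queries through z determine f everywhere. A small numerical check of this identity (illustrative only; the adaptive query algorithm of the paper is more elaborate):

      import numpy as np

      d = 4
      fs = [lambda t, k=k: 1.0 + 0.5 * np.sin((k + 1) * t) for k in range(d)]
      f = lambda x: np.prod([fs[j](x[j]) for j in range(d)])  # rank-one function

      z = np.full(d, 0.3)                  # anchor point, f(z) != 0
      x = np.array([0.1, 0.7, 0.2, 0.9])   # arbitrary evaluation point

      # Univariate traces: query f along the d axis-parallel lines through z.
      traces = []
      for j in range(d):
          zj = z.copy()
          zj[j] = x[j]
          traces.append(f(zj))

      recovered = np.prod(traces) / f(z) ** (d - 1)
      assert np.isclose(recovered, f(x))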

  2. Design guidelines for high dimensional stability of CFRP optical bench

    Science.gov (United States)

    Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe

    2013-09-01

    In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly when embodying reflective optics, angular stability is critical. Angular stability or warping stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve the perfect laminate and there will always be manufacturing errors in trying to reach a quasi-iso laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench) can be easily monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) obtained from typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The selected composite material for the study is the unidirectional prepreg from Tencate M55J/TC410. M55J is a high modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be: the ply angular errors, laminate in-plane parallelism (between 0° ply direction of both facesheets), fiber volume fraction tolerance and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize the warping stability by taking into account the main dimensional stability contributors, the bench geometry and the optical mount interface are then proposed.

  3. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars

    2013-01-01

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,…,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z∈Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)).

  4. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    Full Text Available The normal vector estimation of large-scale scattered point clouds (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge of the sharp increase of the point cloud, mainly owing to its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation on LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. On the premise of calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points of the cloud. The experimental results of several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.

  5. VISPA2: a scalable pipeline for high-throughput identification and annotation of vector integration sites.

    Science.gov (United States)

    Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio

    2017-11-25

    Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires highly accurate and efficient computational software able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) sequence analysis for integration site processing that is fully compliant with paired-end reads and includes a sequence quality filter before and after alignment on the target genome; (2) a heuristic algorithm that reduces false-positive integration sites at the nucleotide level, limiting the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as the researcher front-end to perform integration site analyses without computational skills; (5) a speedup of all steps through parallelization (Hadoop-free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a > 6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for

  6. Asymptomatic dogs are highly competent to transmit Leishmania (Leishmania) infantum chagasi to the natural vector.

    Science.gov (United States)

    Laurenti, Márcia Dalastra; Rossi, Claudio Nazaretian; da Matta, Vânia Lúcia Ribeiro; Tomokane, Thaise Yumie; Corbett, Carlos Eduardo Pereira; Secundino, Nágila Francinete Costa; Pimenta, Paulo Filemon Paulocci; Marcondes, Mary

    2013-09-23

    We evaluated the ability of dogs naturally infected with Leishmania (Leishmania) infantum chagasi to transfer the parasite to the vector and the factors associated with transmission. Thirty-eight dogs were confirmed to be infected by direct observation of Leishmania in lymph node smears. Dogs were grouped according to external clinical signs and laboratory data into symptomatic (n=24) and asymptomatic (n=14) animals. All dogs were sedated and submitted to xenodiagnosis with F1-laboratory-reared Lutzomyia longipalpis. After blood digestion, sand flies were dissected and examined for the presence of promastigotes. Following canine euthanasia, fragments of skin, lymph nodes, and spleen were collected and processed using immunohistochemistry to evaluate tissue parasitism. Specific antibodies were detected using an enzyme-linked immunosorbent assay. Antibody levels were found to be higher in symptomatic dogs compared to asymptomatic dogs (p=0.0396). Both groups presented amastigotes in lymph nodes, while skin parasitism was observed in only 58.3% of symptomatic and in 35.7% of asymptomatic dogs. Parasites were visualized in the spleens of 66.7% and 71.4% of symptomatic and asymptomatic dogs, respectively. Parasite load varied from mild to intense, and was not significantly different between groups. All asymptomatic dogs except for one (93%) were competent to transmit Leishmania to the vector, including eight (61.5%) without skin parasitism. Sixteen symptomatic animals (67%) infected sand flies; six (37.5%) showed no amastigotes in the skin. Skin parasitism was not crucial for the ability to infect Lutzomyia longipalpis but the presence of Leishmania in lymph nodes was significantly related to a positive xenodiagnosis. Additionally, a higher proportion of infected vectors that fed on asymptomatic dogs was observed (p=0.0494). Clinical severity was inversely correlated with the infection rate of sand flies (p=0.027) and was directly correlated with antibody

  7. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples; otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of just one big map, and Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size becomes less of a concern, and errors tend to average out. In our experiments on microarray datasets, which are dense data composed of genetics-related information, the precision of Multi-SOM is 10.58% greater than that of SOM, and its recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.

  8. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high-dimensional

  9. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    Science.gov (United States)

    Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  10. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    International Nuclear Information System (INIS)

    Bechstein, S; Petsche, F; Scheiner, M; Drung, D; Thiel, F; Schnabel, A; Schurig, Th

    2006-01-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  11. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    Energy Technology Data Exchange (ETDEWEB)

    Bechstein, S [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Petsche, F [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Scheiner, M [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Drung, D [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Thiel, F [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Schnabel, A [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Schurig, Th [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany)

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  12. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader's understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist…

  13. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, or Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency, by eliminating early irrelevant features.
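
    For reference, the joint optimization alluded to above can be written, in one common formulation, with a lower bound σ_0 on the noise level supplying the smoothing:

      \min_{\beta \in \mathbb{R}^p,\ \sigma \ge \sigma_0}
        \frac{\lVert y - X\beta \rVert_2^2}{2 n \sigma}
        + \frac{\sigma}{2} + \lambda \lVert \beta \rVert_1

    Minimizing over σ without the lower bound recovers the Square-root/Scaled Lasso objective; the constraint σ ≥ σ_0 removes the non-smoothness as ‖y − Xβ‖ approaches zero that causes the numerical difficulties mentioned above.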

  14. INTERIM ANALYSIS OF THE CONTRIBUTION OF HIGH-LEVEL EVIDENCE FOR DENGUE VECTOR CONTROL.

    Science.gov (United States)

    Horstick, Olaf; Ranzinger, Silvia Runge

    2015-01-01

    This interim analysis reviews the available systematic literature for dengue vector control on three levels: 1) single and combined vector control methods, with existing work on peridomestic space spraying and on Bacillus thuringiensis israelensis, and further work available soon on the use of Temephos, copepods and larvivorous fish; 2) methods applied for a specific purpose, such as outbreak control; and 3) the strategic level, for example decentralization versus centralization, with a systematic review on vector control organization. Clear best-practice guidelines for the methodology of entomological studies are needed, and such studies should include measures of dengue transmission. The following recommendations emerge: although vector control can be effective, implementation remains an issue; single interventions are probably not useful; combinations of interventions have mixed results; careful implementation of vector control measures may be most important; outbreak interventions are often applied with questionable effectiveness.

  15. Vectors and their applications

    CERN Document Server

    Pettofrezzo, Anthony J

    2005-01-01

    Geared toward undergraduate students, this text illustrates the use of vectors as a mathematical tool in plane synthetic geometry, plane and spherical trigonometry, and analytic geometry of two- and three-dimensional space. Its rigorous development includes a complete treatment of the algebra of vectors in the first two chapters.Among the text's outstanding features are numbered definitions and theorems in the development of vector algebra, which appear in italics for easy reference. Most of the theorems include proofs, and coordinate position vectors receive an in-depth treatment. Key concept

  16. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found in two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R^4 onto R^3, which establishes an equivalence between Coulomb and harmonic potentials, is used to show that the exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author)

  17. Characterization of highly anisotropic three-dimensionally nanostructured surfaces

    International Nuclear Information System (INIS)

    Schmidt, Daniel

    2014-01-01

    Generalized ellipsometry, a non-destructive optical characterization technique, is employed to determine geometrical structure parameters and anisotropic dielectric properties of highly spatially coherent three-dimensionally nanostructured thin films grown by glancing angle deposition. The (piecewise) homogeneous biaxial layer model approach is discussed, which can be universally applied to model the optical response of sculptured thin films with different geometries and from diverse materials, and structural parameters as well as effective optical properties of the nanostructured thin films are obtained. Alternative model approaches for slanted columnar thin films, anisotropic effective medium approximations based on the Bruggeman formalism, are presented, which deliver results comparable to the homogeneous biaxial layer approach and in addition provide film constituent volume fraction parameters as well as depolarization or shape factors. Advantages of these ellipsometry models are discussed using the example of metal slanted columnar thin films, which have been conformally coated with a thin passivating oxide layer by atomic layer deposition. Furthermore, the application of an effective medium approximation approach to in-situ growth monitoring of this anisotropic thin film functionalization process is presented. Structural parameters determined with the presented optical model equivalents for slanted columnar thin films were found to agree very well with scanning electron microscope image estimates. Highlights: • Summary of optical model strategies for sculptured thin films with arbitrary geometries • Application of rigorous anisotropic Bruggeman effective medium approximations • In-situ growth monitoring of atomic layer deposition on biaxial metal slanted columnar thin films

  18. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method of π0 or FDR estimation in a dependency context.
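
    Of the procedures compared, the Benjamini-Hochberg step-up rule is the simplest to state: sort the p-values, find the largest k with p_(k) <= q*k/m, and reject the k smallest. A minimal Python sketch (function name and the q=0.05 default are illustrative choices):

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            # Step-up BH: reject the k smallest p-values, where k is the
            # largest index with p_(k) <= q * k / m.
            p = np.asarray(pvals)
            m = len(p)
            order = np.argsort(p)
            passed = p[order] <= q * np.arange(1, m + 1) / m
            rejected = np.zeros(m, dtype=bool)
            if passed.any():
                k = np.nonzero(passed)[0].max()
                rejected[order[:k + 1]] = True
            return rejected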

  19. Microfluidic engineered high cell density three-dimensional neural cultures

    Science.gov (United States)

    Cullen, D. Kacy; Vukasinovic, Jelena; Glezer, Ari; La Placa, Michelle C.

    2007-06-01

    Three-dimensional (3D) neural cultures with cells distributed throughout a thick, bioactive protein scaffold may better represent neurobiological phenomena than planar correlates lacking matrix support. Neural cells in vivo interact within a complex, multicellular environment with tightly coupled 3D cell-cell/cell-matrix interactions; however, thick 3D neural cultures at cell densities approaching that of brain rapidly decay, presumably due to diffusion-limited interstitial mass transport. To address this issue, we have developed a novel perfusion platform that utilizes forced intercellular convection to enhance mass transport. First, we demonstrated that in thick (>500 µm) 3D neural cultures supported by passive diffusion at high cell densities (≥10^4 cells mm^-3), continuous medium perfusion at 2.0-11.0 µL min^-1 improved viability compared to non-perfused cultures (p < 0.05), which exhibited cell death and matrix degradation. In perfused cultures, survival was dependent on proximity to the perfusion source at 2.00-6.25 µL min^-1 (p < 0.05), with >90% viability in both neuronal cultures and neuronal-astrocytic co-cultures. This work demonstrates the utility of forced interstitial convection in improving the survival of high cell density 3D engineered neural constructs and may aid in the development of novel tissue-engineered systems reconstituting 3D cell-cell/cell-matrix interactions.

  20. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
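
    The core object is simple to compute: the difference of two sample correlation matrices, with small entries thresholded away. The Python sketch below uses a single universal threshold tau as a simplified, illustrative stand-in for the paper's entry-wise adaptive thresholds; all names are assumptions:

        import numpy as np

        def differential_correlation(X1, X2, tau=0.2):
            # X1, X2: samples in rows, genes in columns. Hard-threshold the
            # difference of the sample correlation matrices.
            D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
            return D * (np.abs(D) > tau)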

  1. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same sure screening and consistency properties as the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this deterioration is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  2. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  3. Unsteady aerodynamic modeling at high angles of attack using support vector machines

    Directory of Open Access Journals (Sweden)

    Wang Qing

    2015-06-01

    Full Text Available Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack presents great difficulties in model structure determination and parameter estimation, owing to limited understanding of the flow mechanisms. Support vector machines (SVMs) based on statistical learning theory provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to the field of high-angle-of-attack unsteady aerodynamic modeling. After a review of SVMs, several issues associated with unsteady aerodynamic modeling by use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables and the determination of SVM parameters. The least squares SVM (LS-SVM) models are set up from certain dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, which indicates the satisfactory learning and generalization performance of LS-SVMs.
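
    Unlike the standard SVM quadratic program, an LS-SVM is trained by solving a single symmetric linear system in the bias and the dual variables. A minimal numpy sketch for LS-SVM regression with an RBF kernel follows; function names, the kernel choice and all parameter values are illustrative assumptions, not the paper's setup:

        import numpy as np

        def rbf(A, B, gamma_k=1.0):
            # Gaussian (RBF) kernel matrix between row-wise sample sets A and B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma_k * d2)

        def lssvm_fit(X, y, gamma=10.0, gamma_k=1.0):
            # Equality constraints turn the usual SVM QP into one linear
            # system:  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf(X, X, gamma_k) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[1:], sol[0]                     # alpha, b

        def lssvm_predict(Xtr, alpha, b, Xnew, gamma_k=1.0):
            return rbf(Xnew, Xtr, gamma_k) @ alpha + b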

  4. The Dengue Virus Mosquito Vector Aedes aegypti at High Elevation in México

    Science.gov (United States)

    Lozano-Fuentes, Saul; Hayden, Mary H.; Welsh-Rodriguez, Carlos; Ochoa-Martinez, Carolina; Tapia-Santos, Berenice; Kobylinski, Kevin C.; Uejio, Christopher K.; Zielinski-Gutierrez, Emily; Monache, Luca Delle; Monaghan, Andrew J.; Steinhoff, Daniel F.; Eisen, Lars

    2012-01-01

    México has cities (e.g., México City and Puebla City) located at elevations > 2,000 m and above the elevation ceiling below which local climates allow the dengue virus mosquito vector Aedes aegypti to proliferate. Climate warming could raise this ceiling and place high-elevation cities at risk for dengue virus transmission. To assess the elevation ceiling for Ae. aegypti and determine the potential for using weather/climate parameters to predict mosquito abundance, we surveyed 12 communities along an elevation/climate gradient from Veracruz City (sea level) to Puebla City (∼2,100 m). Ae. aegypti was commonly encountered up to 1,700 m and present but rare from 1,700 to 2,130 m. This finding extends the known elevation range in México by > 300 m. Mosquito abundance was correlated with weather parameters, including temperature indices. Potential larval development sites were abundant in Puebla City and other high-elevation communities, suggesting that Ae. aegypti could proliferate should the climate become warmer. PMID:22987656

  5. Fine-scale mapping of vector habitats using very high resolution satellite imagery: a liver fluke case-study.

    Science.gov (United States)

    De Roeck, Els; Van Coillie, Frieke; De Wulf, Robert; Soenen, Karen; Charlier, Johannes; Vercruysse, Jozef; Hantson, Wouter; Ducheyne, Els; Hendrickx, Guy

    2014-12-01

    The visualization of vector occurrence in space and time is an important aspect of studying vector-borne diseases. Detailed maps of possible vector habitats provide valuable information for the prediction of infection risk zones but are currently lacking for most parts of the world. Nonetheless, monitoring vector habitats from the finest scales up to farm level is of key importance to refine currently existing broad-scale infection risk models. Using Fasciola hepatica, a parasitic liver fluke, as a case in point, this study illustrates the potential of very high resolution (VHR) optical satellite imagery to efficiently and semi-automatically detect detailed vector habitats. A WorldView2 satellite image was used to map the habitats of the freshwater snails by which the parasite is transmitted. The vector thrives in small water bodies (SWBs), such as ponds, ditches and other humid areas consisting of open water, aquatic vegetation and/or inundated grass. These water bodies can be as small as a few m2 and are most often not present on existing land cover maps because of their small size. We present a classification procedure based on object-based image analysis (OBIA) that proved valuable for detecting SWBs at a fine scale in an operational and semi-automated way. The classification results were compared to field and other reference data such as existing broad-scale maps and expert knowledge. Overall, the SWB detection accuracy reached up to 87%. The resulting fine-scale SWB map can be used as input for spatial distribution modelling of the liver fluke snail vector to enable development of improved infection risk mapping and management advice adapted to specific, local farm situations.

  6. Two-Sample Tests for High-Dimensional Linear Regression with an Application to Detecting Interactions.

    Science.gov (United States)

    Xia, Yin; Cai, Tianxi; Cai, T Tony

    2018-01-01

    Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular-related genetic mutations important for an inflammation marker.

  7. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the powers of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
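
    The idea of calibrating a max-type statistic with a Gaussian parametric bootstrap can be sketched compactly. The Python fragment below is a simplified one-sample stand-in for the procedures implemented in HDtest, not a port of them; the function name and defaults are assumptions:

        import numpy as np

        def max_stat_pvalue(X, n_boot=1000, seed=0):
            # One-sample max-type test of H0: mean = 0. Critical values come
            # from a Gaussian parametric bootstrap with the sample covariance,
            # so no structural assumptions on the covariance are needed.
            rng = np.random.default_rng(seed)
            n, p = X.shape
            se = X.std(axis=0, ddof=1) / np.sqrt(n)
            t_obs = np.max(np.abs(X.mean(axis=0) / se))
            S = np.cov(X, rowvar=False)
            Z = rng.multivariate_normal(np.zeros(p), S, size=(n_boot, n))
            t_boot = np.max(np.abs(Z.mean(axis=1) / se), axis=1)
            return float(np.mean(t_boot >= t_obs))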

  8. Radar target classification method with high accuracy and decision speed performance using MUSIC spectrum vectors and PCA projection

    Science.gov (United States)

    Secmen, Mustafa

    2011-10-01

    This paper introduces the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudo-spectrum Multiple Signal Classification (MUSIC) algorithm and the principal component analysis (PCA) technique. The aim of this method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which offer super-resolution and resistance to noise. In the final step, the PCA technique is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, a noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is initially obtained. Subsequently, the MUSIC algorithm is applied to this test signal and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels even though it uses only a few reference aspect angles. Besides, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
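
    The MUSIC pseudo-spectrum itself is compact to implement: estimate a covariance from signal snapshots, split off the noise subspace, and scan steering vectors over candidate frequencies. An illustrative numpy sketch for a real-valued time series, not the paper's exact processing chain (snapshot length, names and defaults are assumptions):

        import numpy as np

        def music_spectrum(x, n_sources, freqs, m=32):
            # Covariance from overlapping length-m snapshots of the signal.
            snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
            R = snaps.T @ snaps / len(snaps)
            w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
            En = V[:, :m - n_sources]         # noise-subspace eigenvectors
            spec = []
            for f in freqs:                   # normalized frequencies in (0, 0.5)
                a = np.exp(2j * np.pi * f * np.arange(m))   # steering vector
                spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(spec)             # peaks mark dominant resonances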

  9. Design of a mixer for the thrust-vectoring system on the high-alpha research vehicle

    Science.gov (United States)

    Pahle, Joseph W.; Bundick, W. Thomas; Yeager, Jessie C.; Beissner, Fred L., Jr.

    1996-01-01

    One of the advanced control concepts being investigated on the High-Alpha Research Vehicle (HARV) is multi-axis thrust vectoring using an experimental thrust-vectoring (TV) system consisting of three hydraulically actuated vanes per engine. A mixer is used to translate the pitch-, roll-, and yaw-TV commands into the appropriate TV-vane commands for distribution to the vane actuators. A computer-aided optimization process was developed to perform the inversion of the thrust-vectoring effectiveness data for use by the mixer in performing this command translation. Using this process, a new mixer was designed for the HARV and evaluated in simulation and flight. An important element of the mixer is the priority logic, which determines priority among the pitch-, roll-, and yaw-TV commands.

  10. Highly predictive support vector machine (SVM) models for anthrax toxin lethal factor (LF) inhibitors.

    Science.gov (United States)

    Zhang, Xia; Amin, Elizabeth Ambrose

    2016-01-01

    Anthrax is a highly lethal, acute infectious disease caused by the rod-shaped, Gram-positive bacterium Bacillus anthracis. The anthrax toxin lethal factor (LF), a zinc metalloprotease secreted by the bacilli, plays a key role in anthrax pathogenesis and is chiefly responsible for anthrax-related toxemia and host death, partly via inactivation of mitogen-activated protein kinase kinase (MAPKK) enzymes and consequent disruption of key cellular signaling pathways. Antibiotics such as fluoroquinolones are capable of clearing the bacilli but have no effect on LF-mediated toxemia; LF itself therefore remains the preferred target for toxin inactivation. However, currently no LF inhibitor is available on the market as a therapeutic, partly due to the insufficiency of existing LF inhibitor scaffolds in terms of efficacy, selectivity, and toxicity. In the current work, we present novel support vector machine (SVM) models with high prediction accuracy that are designed to rapidly identify potential novel, structurally diverse LF inhibitor chemical matter from compound libraries. These SVM models were trained and validated using 508 compounds with published LF biological activity data and 847 inactive compounds deposited in the PubChem BioAssay database. One model, M1, demonstrated particularly favorable selectivity toward highly active compounds by correctly predicting 39 (95.12%) out of 41 nanomolar-level LF inhibitors, 46 (93.88%) out of 49 inactives, and 844 (99.65%) out of 847 PubChem inactives in external, unbiased test sets. These models are expected to facilitate the prediction of LF inhibitory activity for existing molecules, as well as identification of novel potential LF inhibitors from large datasets. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
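
    The hash family underlying such structures is short to write down. Below is an illustrative p-stable (Gaussian) LSH for Euclidean distance, h(x) = floor((a·x + b)/w); the LSB-tree additionally orders the resulting hash values along a space-filling curve inside a B-tree, which this sketch omits. Class and parameter names are assumptions:

        import numpy as np

        class EuclideanLSH:
            # p-stable (Gaussian) LSH family for Euclidean distance.
            def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
                rng = np.random.default_rng(seed)
                self.a = rng.normal(size=(n_hashes, dim))    # random directions
                self.b = rng.uniform(0.0, w, size=n_hashes)  # random offsets
                self.w = w                                   # bucket width

            def hash(self, x):
                # Nearby points collide in these buckets with high probability.
                return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))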

  12. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons have shown contradictory results due to technological shortcomings. DATA SOURCES: A systematic search was performed, including the Cochrane Central Register of Controlled Trials database. RESULTS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. CONCLUSIONS: More clinical RCTs are still awaited for the convincing results to be reproduced.

  13. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display - commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic in discrimination tasks; (2) SplitVectors show no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic mapping can be problematic, as participants' confidence was as high as when directly reading from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in more challenging discrimination tasks.

  14. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  15. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    Science.gov (United States)

    Villagómez-Hoyos, Carlos A.; Stuart, Matthias B.; Bechsgaard, Thor; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-04-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis views (PLAX) are obtained, one centred at the aortic valve and another centred at the left ventricle. The acquisition sequence was composed of 3 diverging waves for high frame rate synthetic aperture flow imaging. For verification, a phantom measurement was performed on a transverse straight 5 mm diameter vessel at a depth of 100 mm in a tissue-mimicking phantom. A flow pump produced a 2 ml/s constant flow with a peak velocity of 0.2 m/s. The average estimated flow angle in the ROI was 86.22° +/- 6.66°, with a true flow angle of 90°. A relative velocity bias of -39% with a standard deviation of 13% was found. In vivo acquisitions show complex flow patterns in the heart. In the aortic valve view, blood is seen exiting the left ventricle cavity through the aortic valve into the aorta during the systolic phase of the cardiac cycle. In the left ventricle view, blood flow is seen entering the left ventricle cavity through the mitral valve and splitting in two directions when approaching the left ventricular wall. The work presents 2-D velocity estimates of the heart from a non-invasive transthoracic scan. The ability of the method to detect flow regardless of the beam angle could potentially reveal a more complete view of the flow patterns present in the heart.

  16. Exploiting the behaviour of wild malaria vectors to achieve high infection with fungal biocontrol agents

    Science.gov (United States)

    2012-01-01

    Background Control of mosquitoes that transmit malaria has been the mainstay in the fight against the disease, but alternative methods are required in view of emerging insecticide resistance. Entomopathogenic fungi are candidate alternatives, but to date, few trials have translated the use of these agents to field-based evaluations of their actual impact on mosquito survival and malaria risk. Mineral-oil formulations of the entomopathogenic fungi Metarhizium anisopliae and Beauveria bassiana were applied using five different techniques that each exploited the behaviour of malaria mosquitoes when entering, host-seeking or resting in experimental huts in a malaria endemic area of rural Tanzania. Results Survival of mosquitoes was reduced by 39-57% relative to controls after forcing upward house-entry of mosquitoes through fungus-treated baffles attached to the eaves or after application of fungus-treated surfaces around an occupied bed net (bed net strip design). Moreover, 68 to 76% of the treatment mosquitoes showed fungal growth and thus had sufficient contact with fungus-treated surfaces. A population dynamic model of malaria-mosquito interactions shows that these infection rates reduce malaria transmission by 75-80% due to the effect of fungal infection on adult mortality alone. The model also demonstrated that even if a high proportion of the mosquitoes exhibits outdoor biting behaviour, malaria transmission was still significantly reduced. Conclusions Entomopathogenic fungi strongly affect mosquito survival and have a high predicted impact on malaria transmission. These entomopathogens represent a viable alternative for malaria control, especially if they are used as part of an integrated vector management strategy. PMID:22449130

  17. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities, calculated from experimental data, whether for hadron distributions over rapidity intervals or for particle distributions in an N-dimensional momentum space, one can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of fractal object points and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
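
    For a point sample, the Renyi dimension D_q can be estimated by box counting: cover the space with cells of size eps, form cell probabilities p_i, and read D_q off the slope of log(sum_i p_i^q)/(q-1) against log(eps). An illustrative Python sketch for q != 1 (function name, defaults and the cell sizes are assumptions):

        import numpy as np

        def renyi_dimension(points, q=2.0, epsilons=(0.5, 0.25, 0.125, 0.0625)):
            # points: (n_points, dim) array. Slope of the box-counting curve
            # log(sum_i p_i^q) / (q - 1) versus log(eps) estimates D_q.
            ys, xs = [], []
            for eps in epsilons:
                _, counts = np.unique(np.floor(points / eps), axis=0,
                                      return_counts=True)
                p = counts / counts.sum()
                ys.append(np.log(np.sum(p ** q)) / (q - 1.0))
                xs.append(np.log(eps))
            return np.polyfit(xs, ys, 1)[0]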

  18. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

    A review is given of more recent papers on this subject. These papers have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  19. Design and optimization of stress centralized MEMS vector hydrophone with high sensitivity at low frequency

    Science.gov (United States)

    Zhang, Guojun; Ding, Junwen; Xu, Wei; Liu, Yuan; Wang, Renxin; Han, Janjun; Bai, Bing; Xue, Chenyang; Liu, Jun; Zhang, Wendong

    2018-05-01

    A micro-hydrophone based on the piezoresistive effect, the "MEMS vector hydrophone", was developed for acoustic detection applications. To improve the sensitivity of MEMS vector hydrophones at low frequency, we report a stress-centralized MEMS vector hydrophone (SCVH) mainly used at 20-500 Hz. A stress-concentration area was realized in the sensitive unit of the hydrophone by silicon micromachining technology, and piezoresistors were placed in this area for better mechanical response, thereby obtaining higher sensitivity. Static analysis was done to compare the mechanical response of three different sensitive microstructures: the SCVH, a conventional micro-silicon four-beam vector hydrophone (CFVH) and a lollipop-shaped vector hydrophone (LVH). Fluid-structure interaction (FSI) analysis was used to determine the natural frequency of the SCVH and thus ensure the measurable bandwidth. Finally, a calibration experiment in a standing-wave field was performed to test the SCVH and verify the accuracy of the simulations. The results show that the sensitivity of the SCVH is increased by nearly 17.2 dB relative to the CFVH and by 7.6 dB relative to the LVH over 20-500 Hz.

  20. Visualizing vector field topology in fluid flows

    Science.gov (United States)

    Helman, James L.; Hesselink, Lambertus

    1991-01-01

    Methods of automating the analysis and display of vector field topology in general and flow topology in particular are discussed. Two-dimensional vector field topology is reviewed as the basis for the examination of topology in three-dimensional separated flows. The use of tangent surfaces and clipping in visualizing vector field topology in fluid flows is addressed.

  1. VEST: Abstract vector calculus simplification in Mathematica

    Science.gov (United States)

    Squire, J.; Burby, J.; Qin, H.

    2014-01-01

    We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well-defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper, Burby et al. (2013) [12], we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.

  2. Vector model for polarized second-harmonic generation microscopy under high numerical aperture

    International Nuclear Information System (INIS)

    Wang, Xiang-Hui; Chang, Sheng-Jiang; Lin, Lie; Wang, Lin-Rui; Huo, Bing-Zhong; Hao, Shu-Jian

    2010-01-01

    Based on the vector diffraction theory and the generalized Jones matrix formalism, a vector model for polarized second-harmonic generation (SHG) microscopy is developed, which includes the roles of the axial component P_z, the weight factor and the cross-effect between the lateral components. The numerical results show that as the relative magnitude of P_z increases, the polarization response of the second-harmonic signal will vary from linear polarization to elliptical polarization and the polarization orientation of the second-harmonic signal is different from that under the paraxial approximation. In addition, it is interesting that the polarization response of the detected second-harmonic signal can change with the value of the collimator lens NA. Therefore, it is more advantageous to adopt the vector model to investigate the property of polarized SHG microscopy for a variety of cases

  3. High Frame-Rate Blood Vector Velocity Imaging Using Plane Waves: Simulations and Preliminary Experiments

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov

    2008-01-01

    The method is based on three techniques: 1) The ultrasound is not focused during the transmissions of the ultrasound signals; 2) A 13-bit Barker code is transmitted simultaneously from each transducer element; and 3) The 2-D vector velocity of the blood is estimated using 2-D cross-correlation. A parameter study was performed using the Field II program, and the performance of the method was investigated when a virtual blood vessel was scanned by a linear array transducer. An improved parameter set for the method was identified from the parameter study, and a flow rig measurement was performed using the same improved setup as in the simulations. Finally, the common carotid artery of a healthy male was scanned with a scan sequence that satisfies the limits set by the Food and Drug Administration. Vector velocity images were obtained with a frame rate of 100 Hz, where 40 speckle images are used for each vector velocity image. It was found that the blood flow

  4. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
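
    The (modified) RV-coefficient has a compact closed form. A numpy sketch for two column-centred data matrices X and Y sharing the same samples in rows follows; removing the diagonals of the cross-product matrices is the modification that counters the upward bias of the plain RV when the number of variables greatly exceeds the number of samples (function name is an assumption):

        import numpy as np

        def modified_rv(X, Y):
            # X, Y: column-centred data matrices with the same samples in rows.
            XX = X @ X.T
            YY = Y @ Y.T
            XX -= np.diag(np.diag(XX))   # drop diagonals: the modification that
            YY -= np.diag(np.diag(YY))   # removes the bias of the plain RV
            return np.trace(XX @ YY) / np.sqrt(np.trace(XX @ XX) * np.trace(YY @ YY))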

  5. An episomal vector-based CRISPR/Cas9 system for highly efficient gene knockout in human pluripotent stem cells.

    Science.gov (United States)

    Xie, Yifang; Wang, Daqi; Lan, Feng; Wei, Gang; Ni, Ting; Chai, Renjie; Liu, Dong; Hu, Shijun; Li, Mingqing; Li, Dajin; Wang, Hongyan; Wang, Yongming

    2017-05-24

    Human pluripotent stem cells (hPSCs) represent a unique opportunity for understanding the molecular mechanisms underlying complex traits and diseases. CRISPR/Cas9 is a powerful tool to introduce genetic mutations into hPSCs for loss-of-function studies. Here, we developed an episomal vector-based CRISPR/Cas9 system, which we call epiCRISPR, for highly efficient gene knockout in hPSCs. The epiCRISPR system enables generation of up to 100% insertion/deletion (indel) rates. In addition, the epiCRISPR system enables efficient double-gene knockout and genomic deletion. To minimize off-target cleavage, we combined the episomal vector technology with a double-nicking strategy and a recently developed high-fidelity Cas9. Thus the epiCRISPR system offers a highly efficient platform for genetic analysis in hPSCs.

  6. High-efficiency and flexible generation of vector vortex optical fields by a reflective phase-only spatial light modulator.

    Science.gov (United States)

    Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2017-08-01

    The scheme for generating vector optical fields should have not only high efficiency but also the flexibility to satisfy the requirements of various applications. However, in general, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM), based on the optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. The approach has several advantages, including a simple experimental setup, good flexibility, and high efficiency, making it very promising in applications where higher power is needed. It achieves a generation efficiency of 44.0%, much higher than the 1.1% of the common-path interferometric approach.

  7. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
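
    Neither SCAD nor Elastic SCAD penalties ship with scikit-learn, but the embedded-feature-selection idea can be illustrated with the L1-penalized linear SVM that is available there. A minimal sketch (function name and the C value are placeholders, not tuned settings):

        import numpy as np
        from sklearn.svm import LinearSVC

        def l1_svm_features(X, y, C=0.1):
            # Sparse linear SVM: the L1 penalty zeroes out coefficients, so the
            # surviving ones identify the selected features.
            clf = LinearSVC(penalty="l1", dual=False, C=C, max_iter=5000).fit(X, y)
            return clf, np.flatnonzero(clf.coef_.ravel())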

  8. Dimensional consistency achieved in high-performance synchronizing hubs

    International Nuclear Information System (INIS)

    Garcia, P.; Campos, M.; Torralba, M.

    2013-01-01

    The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production via ("2P2S") has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process. (Author) 21 refs.

  9. "Lollipop-shaped" high-sensitivity Microelectromechanical Systems vector hydrophone based on Parylene encapsulation

    Science.gov (United States)

    Liu, Yuan; Wang, Renxin; Zhang, Guojun; Du, Jin; Zhao, Long; Xue, Chenyang; Zhang, Wendong; Liu, Jun

    2015-07-01

    This paper presents methods of improving the sensitivity of Microelectromechanical Systems (MEMS) vector hydrophones by increasing the sensing area of the cilium and by a fully insulating Parylene membrane. First, a low-density sphere is integrated with the cilium to compose a "lollipop" shape, which considerably increases the sensing area. A mathematical model of the sensitivity of the "lollipop-shaped" MEMS vector hydrophone is presented, and the influences of different structural parameters on the sensitivity are analyzed via simulation. Second, the MEMS vector hydrophone is encapsulated through the conformal deposition of an insulating Parylene membrane, which enables underwater acoustic monitoring without any sound-transparent encapsulation. Finally, the characterization results demonstrate that the sensitivity reaches up to -183 dB at 500 Hz (0 dB re 1 V/µPa), an increase of more than 10 dB compared with the previous cilium-shaped MEMS vector hydrophone. In addition, the frequency response exhibits a sensitivity increment of 6 dB per octave. The working frequency band is 20-500 Hz and the concave-point depth of the figure-8 directivity pattern is beyond 30 dB, indicating that the hydrophone is promising for underwater acoustic applications.

  10. High stability vector-based direct power control for DFIG-based wind turbine

    DEFF Research Database (Denmark)

    Zhu, Rongwu; Chen, Zhe; Wu, Xiaojie

    2015-01-01

    This paper proposes an improved vector-based direct power control (DPC) strategy for the doubly-fed induction generator (DFIG)-based wind energy conversion system. Based on the small signal model, the proposed DPC improves the stability of the DFIG, and avoids the DFIG operating in the marginal...

  11. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrapeoff zone shielding for sputtered impurities

  12. Dimensional consistency achieved in high-performance synchronizing hubs

    Directory of Open Access Journals (Sweden)

    García, P.

    2013-02-01

    Full Text Available The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production via ("2P2S") has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process.


  13. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, and estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around the CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  14. Next generation of adeno-associated virus 2 vectors: Point mutations in tyrosines lead to high-efficiency transduction at lower doses

    Science.gov (United States)

    Zhong, Li; Li, Baozheng; Mah, Cathryn S.; Govindasamy, Lakshmanan; Agbandje-McKenna, Mavis; Cooper, Mario; Herzog, Roland W.; Zolotukhin, Irene; Warrington, Kenneth H.; Weigel-Van Aken, Kirsten A.; Hobbs, Jacqueline A.; Zolotukhin, Sergei; Muzyczka, Nicholas; Srivastava, Arun

    2008-01-01

    Recombinant adeno-associated virus 2 (AAV2) vectors are in use in several Phase I/II clinical trials, but relatively large vector doses are needed to achieve therapeutic benefits. Large vector doses also trigger an immune response as a significant fraction of the vectors fails to traffic efficiently to the nucleus and is targeted for degradation by the host cell proteasome machinery. We have reported that epidermal growth factor receptor protein tyrosine kinase (EGFR-PTK) signaling negatively affects transduction by AAV2 vectors by impairing nuclear transport of the vectors. We have also observed that EGFR-PTK can phosphorylate AAV2 capsids at tyrosine residues. Tyrosine-phosphorylated AAV2 vectors enter cells efficiently but fail to transduce effectively, in part because of ubiquitination of AAV capsids followed by proteasome-mediated degradation. We reasoned that mutations of the surface-exposed tyrosine residues might allow the vectors to evade phosphorylation and subsequent ubiquitination and, thus, prevent proteasome-mediated degradation. Here, we document that site-directed mutagenesis of surface-exposed tyrosine residues leads to production of vectors that transduce HeLa cells ≈10-fold more efficiently in vitro and murine hepatocytes nearly 30-fold more efficiently in vivo at a log lower vector dose. Therapeutic levels of human Factor IX (F.IX) are also produced at an ≈10-fold reduced vector dose. The increased transduction efficiency of tyrosine-mutant vectors is due to lack of capsid ubiquitination and improved intracellular trafficking to the nucleus. These studies have led to the development of AAV vectors that are capable of high-efficiency transduction at lower doses, which has important implications in their use in human gene therapy. PMID:18511559

  15. Unit cell determination of epitaxial thin films based on reciprocal space vectors by high-resolution X-ray diffractometry

    OpenAIRE

    Yang, Ping; Liu, Huajun; Chen, Zuhuang; Chen, Lang; Wang, John

    2013-01-01

    A new approach, based on reciprocal space vectors (RSVs), is developed to determine Bravais lattice types and accurate lattice parameters of epitaxial thin films by high-resolution X-ray diffractometry (HR-XRD). The lattice parameters of single crystal substrates are employed as references to correct the systematic experimental errors of RSVs of thin films. The general procedure is summarized, involving correction of RSVs, derivation of raw unit cell, subsequent conversion to the Niggli unit ...

  16. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  17. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  18. High entomological inoculation rate of malaria vectors in area of high coverage of interventions in southwest Ethiopia: Implication for residual malaria transmission

    Directory of Open Access Journals (Sweden)

    Misrak Abraham

    2017-05-01

    Finally, there was indoor residual malaria transmission in a village with high coverage of bed nets and where the principal malaria vector is susceptible to propoxur and bendiocarb, the insecticides currently in use for indoor residual spraying. The continuing indoor transmission of malaria in such a village implies the need for new tools to supplement the existing interventions and to reduce indoor malaria transmission.

  19. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    Science.gov (United States)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
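
    Sum and difference histograms are cheap to form: for a fixed pixel displacement, histogram the sums and differences of co-occurring grey levels and derive texture features from them. An illustrative numpy sketch with two classic Unser-style features; the function name, chosen displacement and feature subset are assumptions, not the paper's exact feature set:

        import numpy as np

        def sadh_features(img, dx=1, dy=0):
            # img: non-negative integer image; (dx, dy) is the pixel displacement.
            h, w = img.shape
            a = img[:h - dy, :w - dx].astype(int)
            b = img[dy:, dx:].astype(int)
            s, d = (a + b).ravel(), (a - b).ravel()
            hs = np.bincount(s) / s.size            # sum histogram
            hd = np.bincount(d - d.min()) / d.size  # shifted difference histogram
            mean = 0.5 * np.sum(np.arange(hs.size) * hs)
            contrast = np.sum((np.arange(hd.size) + d.min()) ** 2 * hd)
            return mean, contrast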

  20. Rapid transient production in plants by replicating and non-replicating vectors yields high quality functional anti-HIV antibody.

    Directory of Open Access Journals (Sweden)

    Frank Sainsbury

    2010-11-01

    Full Text Available The capacity of plants and plant cells to produce large amounts of recombinant protein has been well established. Due to advantages in terms of speed and yield, attention has recently turned towards the use of transient expression systems, including viral vectors, to produce proteins of pharmaceutical interest in plants. However, the effects of such high-level expression from viral vectors and concomitant effects on host cells may affect the quality of the recombinant product. To assess the quality of antibodies transiently expressed to high levels in plants, we have expressed and characterised the human anti-HIV monoclonal antibody, 2G12, using both replicating and non-replicating systems based on deleted versions of Cowpea mosaic virus (CPMV RNA-2. The highest yield (approximately 100 mg/kg wet weight leaf tissue of affinity-purified 2G12 was obtained when the non-replicating CPMV-HT system was used and the antibody was retained in the endoplasmic reticulum (ER. Glycan analysis by mass spectrometry showed that the glycosylation pattern was determined exclusively by whether the antibody was retained in the ER and did not depend on whether a replicating or non-replicating system was used. Characterisation of the binding and neutralisation properties of all the purified 2G12 variants from plants showed that these were generally similar to those of the Chinese hamster ovary (CHO cell-produced 2G12. Overall, the results demonstrate that replicating and non-replicating CPMV-based vectors are able to direct the production of a recombinant IgG similar in activity to the CHO-produced control. Thus, a complex recombinant protein was produced with no apparent effect on its biochemical properties using either high-level expression or viral replication. The speed with which a recombinant pharmaceutical with excellent biochemical characteristics can be produced transiently in plants makes CPMV-based expression vectors an attractive option for

  1. On vector fields having properties of Reeb fields

    OpenAIRE

    Hajduk, Boguslaw; Walczak, Rafal

    2011-01-01

    We study constructions of vector fields with properties which are characteristic of Reeb vector fields of contact forms. In particular, we prove that all closed oriented odd-dimensional manifolds have geodesible vector fields.

  2. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
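
    The claim above, that a network trained over a projection estimated from a sparse subsample can beat a network trained in the original space, can be sketched with a synthetic toy. Here PCA stands in for the projection method and scikit-learn's MLPRegressor for the network; all sizes and names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Data lying near a 2-D manifold embedded in 50 dimensions.
t = rng.uniform(-1, 1, size=(5000, 2))
X = np.tanh(t @ rng.normal(size=(2, 50))) + 0.01 * rng.normal(size=(5000, 50))
y = np.sin(3 * t[:, 0]) * t[:, 1]            # target defined on the manifold

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The projection is estimated from a sparse subsample, as discussed above.
sample = X_tr[rng.choice(len(X_tr), 200, replace=False)]
proj = PCA(n_components=2).fit(sample)

full = MLPRegressor((64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)
low = MLPRegressor((64, 64), max_iter=2000, random_state=0).fit(
    proj.transform(X_tr), y_tr)

print("R^2, original 50-D space: ", full.score(X_te, y_te))
print("R^2, projected 2-D space: ", low.score(proj.transform(X_te), y_te))
```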

  3. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
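
    The Poisson step of such a particle-in-cell scheme is compact enough to sketch. Below is a minimal spectral solver for a doubly periodic 2-D grid, assuming the charge density has already been deposited onto the grid; the units and the centred-difference field evaluation are illustrative rather than those of the code described above.

```python
import numpy as np

def solve_poisson_fft(rho, lx, ly, eps0=1.0):
    """Solve laplacian(phi) = -rho/eps0 on a periodic grid; return phi, E."""
    ny, nx = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    k2[0, 0] = 1.0                           # avoid dividing the mean mode by zero
    phi_hat = np.fft.fft2(rho) / (eps0 * k2)
    phi_hat[0, 0] = 0.0                      # fix the arbitrary potential offset
    phi = np.fft.ifft2(phi_hat).real
    # E = -grad(phi) via centred differences on the periodic grid.
    ex = -(np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * lx / nx)
    ey = -(np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * ly / ny)
    return phi, ex, ey

phi, ex, ey = solve_poisson_fft(np.random.default_rng(0).normal(size=(64, 64)), 1.0, 1.0)
```

    In a full simulation this solve alternates with an area-weighted charge deposition and a stepwise push of the superparticles under Newton's law.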

  4. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    Science.gov (United States)

    2016-12-01

    This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space, and a companion class for rotations of such vectors. (ARL-TR-7894, US Army Research Laboratory, December 2016, by Richard Saucier. Approved for public release; distribution is unlimited.)

  5. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  6. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in "Defining a vector" to his final comments on "What then is a vector?" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p

  7. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  8. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  9. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data

  11. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing explicitly for approximation errors. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
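
    The core computation shared by the three records above is compact enough to sketch: a linear program finds non-negative, sum-to-one barycentric weights that reconstruct the current state from a library of past states with an explicit L1 approximation error, and the weights are then mapped onto the stored successors to give one free-running prediction step. This is a sketch of the idea only; the paper's exact objective and constraints may differ.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(x, library):
    """Weights w >= 0 with sum(w) = 1 minimising the L1 error |library.T @ w - x|."""
    n, d = library.shape                       # n candidate states in d dimensions
    c = np.concatenate([np.zeros(n), np.ones(d)])          # minimise sum of slacks e
    A_ub = np.block([[library.T, -np.eye(d)],              #  library.T @ w - x <= e
                     [-library.T, -np.eye(d)]])            # -(library.T @ w - x) <= e
    b_ub = np.concatenate([x, -x])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)])[None, :]   # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)] * d)
    return res.x[:n]

rng = np.random.default_rng(1)
states = rng.normal(size=(50, 8))              # hypothetical past states
succ = np.roll(states, -1, axis=0)             # their one-step successors
x_now = states[10] + 0.05 * rng.normal(size=8) # noisy current state
w = barycentric_weights(x_now, states[:-1])
x_next = w @ succ[:-1]                         # free-running prediction step
```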

  12. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  13. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  14. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  15. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  16. Two dimensional simulation of high power laser-surface interaction

    International Nuclear Information System (INIS)

    Goldman, S.R.; Wilke, M.D.; Green, R.E.L.; Johnson, R.P.; Busch, G.E.

    1998-01-01

    For laser intensities in the range of 10^8-10^9 W/cm^2, and pulse lengths of order 10 microseconds or longer, the authors have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense material aspects of the laser-matter interaction. The unique aspect of their treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. The authors will present an analysis of some relatively well diagnosed experiments which have been useful in developing their modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence their simulational technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity, pulse-width range. This work is useful in many technical areas such as materials processing

  17. Symmetric vectors and algebraic classification

    International Nuclear Information System (INIS)

    Leibowitz, E.

    1980-01-01

    The concept of symmetric vector field in Riemannian manifolds, which arises in the study of relativistic cosmological models, is analyzed. Symmetric vectors are tied up with the algebraic properties of the manifold curvature. A procedure for generating a congruence of symmetric fields out of a given pair is outlined. The case of a three-dimensional manifold of constant curvature ("isotropic universe") is studied in detail, with all its symmetric vector fields being explicitly constructed

  18. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise in quantum information applications defined in high-dimensional Hilbert space. (letter)

  19. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
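
    A much-simplified sketch of that search: scan 2-D projections of the context space for cells whose mean execution error deviates strongly from the global mean. The grid scan, the z-score test, and the assumption that contexts are normalized to [0, 1] are all illustrative simplifications of the paper's parametric RIM search.

```python
import numpy as np
from itertools import combinations

def find_rims(context, error, n_bins=8, z_thresh=3.0, min_pts=10):
    """Flag grid cells of 2-D context projections with anomalous mean error."""
    mu, sd = error.mean(), error.std()
    hits = []
    for i, j in combinations(range(context.shape[1]), 2):
        bi = np.minimum((context[:, i] * n_bins).astype(int), n_bins - 1)
        bj = np.minimum((context[:, j] * n_bins).astype(int), n_bins - 1)
        for a in range(n_bins):
            for b in range(n_bins):
                m = (bi == a) & (bj == b)
                if m.sum() >= min_pts:
                    z = (error[m].mean() - mu) / (sd / np.sqrt(m.sum()))
                    if abs(z) > z_thresh:
                        hits.append((i, j, a, b, round(z, 1)))
    return hits

rng = np.random.default_rng(0)
ctx = rng.uniform(size=(5000, 4))              # hypothetical execution contexts
err = rng.normal(size=5000) + 2.0 * ((ctx[:, 0] > 0.8) & (ctx[:, 2] < 0.2))
print(find_rims(ctx, err)[:5])                 # cells in the (0, 2) projection
```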

  20. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric k-NN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by aggregating all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
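
    A dependency-light sketch of the two-stage idea, with PCA standing in for the deep autoencoder so the example stays self-contained: compress the data to a compact subspace, then score test points by the mean k-NN distance averaged over an ensemble of detectors built on random subsets of the nominal training data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def hybrid_scores(X_train, X_test, dim=10, n_detectors=5, subset=0.5, k=5, seed=0):
    """Semi-supervised anomaly scores: compress, then ensemble k-NN distances."""
    rng = np.random.default_rng(seed)
    enc = PCA(n_components=dim).fit(X_train)         # stage 1: compact subspace
    Z_tr, Z_te = enc.transform(X_train), enc.transform(X_test)
    scores = np.zeros(len(Z_te))
    for _ in range(n_detectors):                     # stage 2: k-NN ensemble
        idx = rng.choice(len(Z_tr), int(subset * len(Z_tr)), replace=False)
        nn = NearestNeighbors(n_neighbors=k).fit(Z_tr[idx])
        dist, _ = nn.kneighbors(Z_te)
        scores += dist.mean(axis=1)
    return scores / n_detectors                      # higher = more anomalous

rng = np.random.default_rng(1)
nominal = rng.normal(size=(1000, 50))
print(hybrid_scores(nominal, rng.normal(size=(5, 50)) + 3.0))  # shifted outliers
```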

  1. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

    Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during tasks was filled by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  3. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
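
    A minimal sketch of the variable-kernel idea this record is truncated around: each training point receives its own Gaussian bandwidth, here set to its k-th nearest-neighbour distance, which is a common illustrative rule rather than necessarily the estimator proposed in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def variable_kde(X_train, X_query, k=10):
    """Sample-point Gaussian KDE with a per-point bandwidth h_i."""
    n, d = X_train.shape
    h = np.sort(cdist(X_train, X_train), axis=1)[:, k]   # k-th NN distance per point
    d_q = cdist(X_query, X_train)                        # query-to-train distances
    norm = (2 * np.pi) ** (d / 2) * h ** d               # Gaussian normalisation
    return (np.exp(-0.5 * (d_q / h) ** 2) / norm).mean(axis=1)

rng = np.random.default_rng(0)
density = variable_kde(rng.normal(size=(500, 3)), rng.normal(size=(10, 3)))
```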

  4. High-titer recombinant adeno-associated virus production utilizing a recombinant herpes simplex virus type I vector expressing AAV-2 Rep and Cap.

    Science.gov (United States)

    Conway, J E; Rhys, C M; Zolotukhin, I; Zolotukhin, S; Muzyczka, N; Hayward, G S; Byrne, B J

    1999-06-01

    Recombinant adeno-associated virus type 2 (rAAV) vectors have recently been used to achieve long-term, high level transduction in vivo. Further development of rAAV vectors for clinical use requires significant technological improvements in large-scale vector production. In order to facilitate the production of rAAV vectors, a recombinant herpes simplex virus type I vector (rHSV-1) which does not produce ICP27, has been engineered to express the AAV-2 rep and cap genes. The optimal dose of this vector, d27.1-rc, for AAV production has been determined and results in a yield of 380 expression units (EU) of AAV-GFP produced from 293 cells following transfection with AAV-GFP plasmid DNA. In addition, d27.1-rc was also efficient at producing rAAV from cell lines that have an integrated AAV-GFP provirus. Up to 480 EU/cell of AAV-GFP could be produced from the cell line GFP-92, a proviral, 293 derived cell line. Effective amplification of rAAV vectors introduced into 293 cells by infection was also demonstrated. Passage of rAAV with d27.1-rc results in up to 200-fold amplification of AAV-GFP with each passage after coinfection of the vectors. Efficient, large-scale production (>10^9 cells) of AAV-GFP from a proviral cell line was also achieved and these stocks were free of replication-competent AAV. The described rHSV-1 vector provides a novel, simple and flexible way to introduce the AAV-2 rep and cap genes and helper virus functions required to produce high-titer rAAV preparations from any rAAV proviral construct. The efficiency and potential for scalable delivery of d27.1-rc to producer cell cultures should facilitate the production of sufficient quantities of rAAV vectors for clinical application.

  5. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  6. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  7. Nanosatellite High-Precision Magnetic Missions Enabled by Advances in a Stand-Alone Scalar/Vector Absolute Magnetometer

    Science.gov (United States)

    Hulot, G.; Leger, J. M.; Vigneron, P.; Jager, T.; Bertrand, F.; Coisson, P.; Deram, P.; Boness, A.; Tomasini, L.; Faure, B.

    2017-12-01

    Satellites of the ESA Swarm mission currently in operation carry a new generation of Absolute Scalar Magnetometers (ASM), which nominally deliver 1 Hz scalar data for calibrating the relative flux gate magnetometers that complete the magnetometry payload (together with star cameras, STR, for attitude restitution) and provide extremely accurate scalar measurements of the magnetic field for science investigations. These ASM instruments, however, can also operate in two additional modes, a high-frequency 250 Hz scalar mode and a 1 Hz absolute dual-purpose scalar/vector mode. The 250 Hz scalar mode already allowed the detection of previously very poorly documented extremely low frequency whistler signals produced by lightning in the atmosphere, while the 1 Hz scalar/vector mode has provided data that, combined with attitude restitution from the STR, could be used to produce scientifically relevant core field and lithospheric field models. Both ASM modes have thus now been fully validated for science applications. Efforts towards developing an improved and miniaturized version of this instrument are now well under way with CNES support in the context of the preparation of a 12U nanosatellite mission (NanoMagSat) proposed to be launched to complement the Swarm satellite constellation. This advanced miniaturized ASM could potentially operate in an even more useful mode, simultaneously providing high frequency (possibly beyond 500 Hz) absolute scalar data and self-calibrated 1 Hz vector data, thus providing scientifically valuable data for multiple science applications. In this presentation, we will illustrate the science such an instrument taken on board a nanosatellite could enable, and report on the current status of the NanoMagSat project that intends to take advantage of it.

  8. HASE: Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    G.V. Roshchupkin (Gennady); H.H.H. Adams (Hieab); M.W. Vernooij (Meike); A. Hofman (Albert); C.M. van Duijn (Cornelia); M.K. Ikram (Kamran); W.J. Niessen (Wiro)

    2016-01-01

    textabstractHigh-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  9. HASE : Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    Roshchupkin, G. V.; Adams, H; Vernooij, Meike W.; Hofman, A; Van Duijn, C. M.; Ikram, M. Arfan; Niessen, W.J.

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  10. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  11. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage method in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct three-dimensional absorbers and present numerical results to show the effectiveness of the proposed data storage.

  12. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.; Schoen, Alia P.; Hu, Liangbing; Kim, Han Sun; Heilshorn, Sarah C.; Cui, Yi

    2010-01-01

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. We here present a textile-based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, makes a gravity-fed device operating at 100000 L/(h m2) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  13. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.

    2010-09-08

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. We here present a textile-based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, makes a gravity-fed device operating at 100000 L/(h m2) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  14. Investigation of Acoustic Vector Sensor Data Processing in the Presence of Highly Variable Bathymetry

    Science.gov (United States)

    2014-06-01

    The report examines a shelf region to the north of a canyon and the impact of this 3-dimensional (3D) variable bathymetry, which may be combined with the effects of bottom-path reflections, on acoustic vector sensor data; weaker arrivals at large negative angles are consistent with the earliest bottom reflections.

  15. One-dimensional model for QCD at high energy

    International Nuclear Information System (INIS)

    Iancu, E.; Santana Amaral, J.T. de; Soyez, G.; Triantafyllopoulos, D.N.

    2007-01-01

    We propose a stochastic particle model in (1+1) dimensions, with one dimension corresponding to rapidity and the other one to the transverse size of a dipole in QCD, which mimics high-energy evolution and scattering in QCD in the presence of both saturation and particle-number fluctuations, and hence of pomeron loops. The model evolves via non-linear particle splitting, with a non-local splitting rate which is constrained by boost-invariance and multiple scattering. The splitting rate saturates at high density, much like the gluon emission rate in the JIMWLK evolution. In the mean field approximation obtained by ignoring fluctuations, the model exhibits the hallmarks of the BK equation, namely a BFKL-like evolution at low density, the formation of a traveling wave, and geometric scaling. In the full evolution including fluctuations, the geometric scaling is washed out at high energy and replaced by diffusive scaling. It is likely that the model belongs to the universality class of the reaction-diffusion process. The analysis of the model sheds new light on the pomeron loops equations in QCD and their possible improvements

  16. Infinite-Dimensional Symmetry Algebras as a Help Toward Solutions of the Self-Dual Field Equations with One Killing Vector

    Science.gov (United States)

    Finley, Daniel; McIver, John K.

    2002-12-01

    The sDiff(2) Toda equation determines all self-dual, vacuum solutions of the Einstein field equations with one rotational Killing vector. Some history of the searches for non-trivial solutions is given, including those that begin with the limit as n → ∞ of the A_n Toda lattice equations. That approach is applied here to the known prolongation structure for the Toda lattice, hoping to use Bäcklund transformations to generate new solutions. Although this attempt has not yet succeeded, new faithful (tangent-vector) realizations of A_∞ are described, and a direct approach via the continuum Lie algebras of Saveliev and Leznov is given.

  17. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. Thus, the analysis of such time series data seeks to search for gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared even to the already difficult, NP-hard two-dimensional biclustering problem. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at
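
    Step (i) above is straightforward to sketch: flatten each gene's time-by-condition expression matrix into a single concatenated vector, normalize, and cluster. The shapes, the L2 normalization, and the choice of k-means are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_genes, n_times, n_conds = 2000, 8, 3
expr = rng.normal(size=(n_genes, n_times, n_conds))   # hypothetical expression data

# Concatenate the time courses of all conditions into one vector per gene,
# then L2-normalise so clusters reflect pattern rather than magnitude.
vecs = expr.reshape(n_genes, n_times * n_conds)
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(vecs)
# Steps (ii)-(iii) would then label each cluster as similar vs distinct across
# conditions and rescue genes from unclassified clusters.
```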

  18. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  19. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been done in the literature on estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  20. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been done in the literature on estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  1. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been done in the literature on estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
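
    A minimal sketch of why the problem in the three records above is delicate, using a shrinkage estimator (one kind of proposal among those the paper compares) as the stand-in: when the dimension approaches the sample size, the log-determinant of the sample covariance degenerates, while the shrinkage estimate stays near the truth.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 80, 60                                   # sample size close to dimension
X = rng.normal(size=(n, p))                     # true covariance is I_p, log-det 0

S = np.cov(X, rowvar=False)                     # sample covariance
lw = LedoitWolf().fit(X).covariance_            # shrinkage estimate

# slogdet is used for numerical stability; the true value is 0.
print("sample covariance log-det:  ", np.linalg.slogdet(S)[1])
print("shrinkage estimate log-det: ", np.linalg.slogdet(lw)[1])
```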

  2. Study of Muon Pairs and Vector Mesons Produced in High Energy Pb-Pb Interactions

    CERN Multimedia

    Karavicheva, T; Atayan, M; Bordalo, P; Constans, N P; Gulkanyan, H; Kluberg, L

    2002-01-01

    The experiment studies dimuons produced in Pb-Pb and p-A collisions, at nucleon-nucleon c.m. energies of √s = 18 and 30 GeV respectively. The setup accepts dimuons in a kinematical range roughly defined as 0.1 … 1 GeV/c, and withstands maximal luminosity (5×10^7 Pb ions and 10^7 interactions per burst). The physics includes signals which probe QGP (Quark-Gluon Plasma), namely the φ, J/ψ and ψ′ vector mesons and thermal dimuons, and reference signals, namely the (unseparated) ρ and ω mesons, and Drell-Yan dimuons. The experiment is a continuation, with improved means, of NA38, and expands its study of charmonium suppression and strangeness enhancement. The muons are measured in the former NA10 spectrometer, which is shielded from the hot target region by a beam stopper and absorber wall. The muons traverse 5 m of BeO and C. The impact parameter is determined by a Zero Degree Calorimeter (Ta with silica fibres). Energy dissipation ...

  3. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

    Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur has remained elusive. We report that molybdenum disulfide layers are able to serve as a nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined in between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), agree with the computational insights. Due to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s^-1 without the addition of conductive additives. Using such hybrid structures as electrodes, two-electrode supercapacitor cells yield a power density of 106 W kg^-1 and an energy density of 47.5 Wh kg^-1 in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  4. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data differences along sparse and noisy dimensions occupy a large proportion of the similarity, making any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or an adjacent interval are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
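
    A minimal sketch of the interval scheme described above: normalize each dimension onto a grid of intervals and credit only components that land in the same or an adjacent interval. The bin count and the half credit for adjacent intervals are illustrative choices.

```python
import numpy as np

def lattice_similarity(x, y, lo, hi, n_bins=10):
    """Similarity in [0, 1] from per-dimension interval membership."""
    # Map each component onto its interval index in [0, n_bins - 1].
    bx = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    by = np.clip(((y - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    gap = np.abs(bx - by)
    credit = np.where(gap == 0, 1.0, np.where(gap == 1, 0.5, 0.0))
    return credit.mean()

rng = np.random.default_rng(0)
lo, hi = np.zeros(100), np.ones(100)            # per-dimension data ranges
print(lattice_similarity(rng.uniform(size=100), rng.uniform(size=100), lo, hi))
```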

  5. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  6. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  7. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  8. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
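
    The core geometric operation behind the tool described in the three records above is projecting a more-than-3-dimensional latent space onto orthonormal 2-D views; navigating a continuum of such views is then a matter of smoothly rotating the projection. A minimal sketch of one view (the GUI itself is, of course, not reproduced here):

```python
import numpy as np

def random_2d_projection(dim, rng):
    # Orthonormalise two random directions to obtain one 2-D "view".
    q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))
    return q                                    # shape (dim, 2)

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 8))              # hypothetical latent trajectories
view = latent @ random_2d_projection(8, rng)    # 2-D cloud ready to plot
```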

  9. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  10. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing, such as the requirement...

  11. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
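
    A minimal sketch of the two-stage idea (diffusion-map embedding, then Gaussian process regression in the embedded coordinates) is given below. The data, bandwidth, and kernel are illustrative assumptions, and the extension of the embedding to new points (needed for true out-of-sample prediction) is omitted:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def diffusion_map(X, eps, n_components=2):
            """Basic diffusion-map embedding of the rows of X."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
            K = np.exp(-d2 / eps)                                # Gaussian affinities
            P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic Markov matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)[1:n_components + 1]   # skip the trivial eigenvector
            return vecs[:, order].real * vals[order].real        # diffusion coordinates

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))                 # stand-in for high-dimensional well logs
        y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

        Z = diffusion_map(X, eps=10.0)                 # low-dimensional intrinsic coordinates
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(Z, y)
        print(gpr.predict(Z[:5]), y[:5])               # in-sample sanity check only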

  12. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces which is mainly due to their dramatical over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  13. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detection probability and precision of 2D and 3D atom localization can be significantly improved by adjusting the system parameters: the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  14. A Vector Printing Method for High-Speed Electrohydrodynamic (EHD) Jet Printing Based on Encoder Position Sensors

    Directory of Open Access Journals (Sweden)

    Thanh Huy Phung

    2018-02-01

    Full Text Available Electrohydrodynamic (EHD) jet printing has been widely used in the field of direct micro-nano patterning applications, due to its high-resolution printing capability. So far, vector line printing using a single nozzle has been used for most EHD printing applications. However, applications have been limited to low-speed printing, to avoid non-uniform line width near the end points where line printing starts and ends. At the end points of vector line printing, the deposited drop amount is likely to be significantly larger than along the rest of the printed line, due to unavoidable acceleration and deceleration. In this study, we proposed a method to solve these printing quality problems by producing droplets at an equally spaced distance, irrespective of the printing speed. For this purpose, an encoder processing unit (EPU) was developed, so that the jetting trigger could be generated according to user-defined spacing by using encoder position signals, which are used for the positioning control of the two linear stages.
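
    The core of the equal-spacing trigger logic can be sketched in a few lines; the counts-per-millimetre and spacing values below are illustrative assumptions, not the EPU's actual parameters:

        def jetting_triggers(encoder_counts, counts_per_mm=1000, spacing_mm=0.05):
            """Yield sample indices at which to fire a jetting pulse: one pulse
            per fixed increment of stage position, independent of stage speed."""
            spacing_counts = counts_per_mm * spacing_mm
            next_fire = encoder_counts[0] + spacing_counts
            for i, c in enumerate(encoder_counts):
                while c >= next_fire:          # fast motion may cross several increments
                    yield i
                    next_fire += spacing_counts

        # A stage accelerating from rest: pulses stay equally spaced in *position*,
        # so drops near the end points are no denser than elsewhere.
        positions = [int(0.5 * 800 * (t / 100) ** 2 * 1000) for t in range(100)]
        print(list(jetting_triggers(positions, spacing_mm=20.0)))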

  15. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  16. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  17. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)

    2016-07-27

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  18. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method to cluster data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering non-spatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified on synthetic datasets.

  19. Vector velocimeter

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to a compact, reliable and low-cost vector velocimeter for example for determining velocities of particles suspended in a gas or fluid flow, or for determining velocity, displacement, rotation, or vibration of a solid surface, the vector velocimeter comprising a laser...

  20. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  1. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  2. Vectorization and improvement of nuclear codes. 3. DGR, STREAM V3.1, Cella, GGR

    Energy Technology Data Exchange (ETDEWEB)

    Nemoto, Toshiyuki; Eguchi, Norikuni; Watanabe, Hideo; Machida, Masahiko; Yokokawa, Mitsuo; Fujii, Minoru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-01-01

    Four nuclear codes were vectorized and improved in fiscal year 1993 to achieve high-speed performance on the VP2600 supercomputer at the Computing and Information Systems Center of JAERI: DGR, a molecular dynamics code that simulates irradiation damage in diamond crystals; STREAM V3.1, a three-dimensional non-steady compressible fluid dynamics code; Cella, a two-dimensional fluid simulation code based on a cellular automaton model; and GGR, a molecular dynamics code that simulates irradiation damage in black carbon crystals. Speed-up ratios of vector over scalar mode on the VP2600 are 2.8, 6.8-14.8, 15-16 and 1.23 for DGR, STREAM V3.1, Cella and GGR, respectively. In this report, we present the vectorization techniques, their effects, evaluations of the numerical results, and the techniques used for the improvements. (author).

  3. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
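
    As a hedged illustration of the preprocessing stage only: the core of linear slow feature analysis can be written in a few lines (the paper uses a hierarchical, nonlinear SFA network; this sketch shows the basic whiten-then-minimize-derivative-variance step on synthetic signals):

        import numpy as np

        def linear_sfa(X, n_features=2):
            """Return the n slowest linear features of the signal matrix X (time x dim)."""
            X = X - X.mean(axis=0)
            vals, vecs = np.linalg.eigh(np.cov(X.T))
            W = vecs / np.sqrt(vals)            # whitening matrix (columns scaled)
            Z = X @ W
            dZ = np.diff(Z, axis=0)             # temporal derivatives
            dvals, dvecs = np.linalg.eigh(np.cov(dZ.T))
            return Z @ dvecs[:, :n_features]    # smallest derivative variance = slowest

        t = np.linspace(0, 4 * np.pi, 500)
        slow, fast = np.sin(t), np.sin(20 * t)
        X = np.column_stack([slow + 0.5 * fast, fast - 0.3 * slow])
        features = linear_sfa(X, n_features=1)  # recovers the slow sinusoid up to sign/scale
        print(abs(np.corrcoef(features[:, 0], slow)[0, 1]))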

  4. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for the detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered case where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples demonstrate the efficacy of the method.
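
    A one-dimensional caricature of the underlying indicator is easy to state: across a jump, a divided difference stays O(1) instead of shrinking with the grid spacing. The sketch below is only that caricature, not the paper's polynomial-annihilation scheme on adaptive sparse grids:

        import numpy as np

        def jump_indicator(f, a, b, n):
            """First-order differences of f on a uniform grid: O(h) where f is
            smooth, O(1) across a discontinuity."""
            x = np.linspace(a, b, n)
            fx = f(x)
            h = x[1] - x[0]
            return x[:-1] + h / 2, np.abs(np.diff(fx))

        f = lambda x: np.where(x < 0.3, np.sin(x), np.sin(x) + 2.0)
        mid, ind = jump_indicator(f, 0.0, 1.0, 201)
        print(mid[np.argmax(ind)])   # ~0.3, the location of the jump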

  5. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
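
    The alternating least-squares step itself is simple to sketch. The rank, regularization weight, and two-variable test function below are illustrative, and the paper's error indicator and general d-dimensional separated format are omitted:

        import numpy as np

        rng = np.random.default_rng(1)
        y1, y2 = np.linspace(0, 1, 40), np.linspace(0, 1, 50)
        U = np.exp(-np.outer(y1, y2)) + 0.01 * rng.normal(size=(40, 50))  # samples of u(y1, y2)

        r, lam = 3, 1e-6                          # separation rank, ridge weight
        A, B = rng.normal(size=(40, r)), rng.normal(size=(50, r))
        for _ in range(50):                       # alternate two ridge regressions
            A = U @ B @ np.linalg.inv(B.T @ B + lam * np.eye(r))
            B = U.T @ A @ np.linalg.inv(A.T @ A + lam * np.eye(r))
        print(np.linalg.norm(U - A @ B.T) / np.linalg.norm(U))  # small relative error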

  6. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  7. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  8. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  9. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    Science.gov (United States)

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked vector dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which could be considered a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked vector solitons generated in fiber lasers operated in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDSs are obtained. However, pulse chirp plays a significant role in the generation of pseudo-high-order GVLVDSs.

  10. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of high-speed shaking table structures. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. The paper discusses all of the key intermediate links, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of shaking table structures.

  11. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  12. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.

  13. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...

  14. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  15. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  16. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  17. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...

  18. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  19. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  20. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  1. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  2. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  3. Cloning vector

    Science.gov (United States)

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  4. Cloning vector

    Science.gov (United States)

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  5. Gardening as vector of a humanization of high-rise building

    Science.gov (United States)

    Lekareva, Nina; Zaslavskaya, Anna

    2018-03-01

    This article addresses the integration of vertical gardening into high-rise buildings under constrained town-planning conditions. Based on an analysis of existing experience in the design and construction of "biopositive" high-rise buildings, the ecological, town-planning, social and structural advantages of roof gardens and vertical gardens are considered [1]. The principle of humanization through greening, which takes into account the requirements of ecology and building energy efficiency and improves construction quality while minimizing costs and maximizing comfort, is put forward as the main mechanism for increasing the investment appeal of high-rise construction. The National Standards of Green Construction, designed to adapt the international requirements for energy-efficient, environmentally friendly and comfortable buildings and complexes to local conditions, are also considered [2,3].

  6. High-charge and multiple-star vortex coronagraphy from stacked vector vortex phase masks.

    Science.gov (United States)

    Aleksanyan, Artur; Brasselet, Etienne

    2018-02-01

    Optical vortex phase masks are now installed at many ground-based large telescopes for high-contrast astronomical imaging. To date, such instrumental advances have been restricted to the use of helical phase masks of the lowest even order, while future giant telescopes will require high-order masks. Here we propose a single-stage on-axis scheme to create high-order vortex coronagraphs based on second-order vortex phase masks. By extending our approach to an off-axis design, we also explore the implementation of multiple-star vortex coronagraphy. An experimental laboratory demonstration is reported and supported by numerical simulations. These results offer a practical roadmap to the development of future coronagraphic tools with enhanced performances.

  7. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  8. Identifying individuals at high risk of psychosis: predictive utility of Support Vector Machine using structural and functional MRI data

    Directory of Open Access Journals (Sweden)

    Isabel eValli

    2016-04-01

    Full Text Available The identification of individuals at high risk of developing psychosis is entirely based on clinical assessment, associated with limited predictive potential. There is therefore increasing interest in the development of biological markers that could be used in clinical practice for this purpose. We studied 25 individuals with an At Risk Mental State for psychosis and 25 healthy controls using structural MRI, and functional MRI in conjunction with a verbal memory task. Data were analysed using a standard univariate analysis, and with Support Vector Machine (SVM), a multivariate pattern recognition technique that enables statistical inferences to be made at the level of the individual, yielding results with high translational potential. The application of SVM to structural MRI data permitted the identification of individuals at high risk of psychosis with a sensitivity of 68% and a specificity of 76%, resulting in an accuracy of 72% (p<0.001). Univariate volumetric between-group differences did not reach statistical significance. In contrast, the univariate fMRI analysis identified between-group differences (p<0.05 corrected) while the application of SVM to the same data did not. Since SVM is well suited to identifying the pattern of abnormality that distinguishes two groups, whereas univariate methods are more likely to identify regions that individually are most different between two groups, our results suggest the presence of focal functional abnormalities in the context of a diffuse pattern of structural abnormalities in individuals at high clinical risk of psychosis.

  9. Determination of the components of three dimensional vector and tensor anisotropy of cosmic radiation with application to the results of the Musala experiment

    International Nuclear Information System (INIS)

    Somogyi, A.J.

    1976-09-01

    The paper proves that it is possible to interpret the experimental results of the Musala experiment as consequences of a vector anisotropy with maximum in the direction of the galactic centre and a tensor anisotropy with principal axes in the physically plausible directions of the galactic arm, the normal to the galactic plane, and the direction perpendicular to them, respectively. It is underlined that this interpretation is not the only possible one and, in addition, the statistical errors are rather large. The results favour the galactic origin of the particles concerned (E = 6×10¹³ eV). (Sz.N.Z.)

  10. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and the method requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  11. Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice

    Science.gov (United States)

    Benacka, Jan

    2015-01-01

    The article presents the results of an experiment in which Excel applications that depict rotatable and sizable orthographic projection of simple 3D figures with face overlapping were developed with thirty gymnasium (high school) students of age 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…
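
    The underlying vector algebra translates directly into code. Below is a minimal Python transcription of the rotate-then-project step (the rotation angles and the unit cube are illustrative; the classroom work itself was done with Excel formulas):

        import numpy as np

        def rotation(ax, ay, az):
            """Rotation matrix composed from rotations about x, y, z (radians)."""
            cx, sx = np.cos(ax), np.sin(ax)
            cy, sy = np.cos(ay), np.sin(ay)
            cz, sz = np.cos(az), np.sin(az)
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
        R = rotation(0.4, 0.6, 0.1)
        projected = (cube @ R.T)[:, :2]   # orthographic projection: drop the z-coordinate
        print(projected.round(2))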

  12. Vector magnetic field inversions of high cadence SOLIS-VSM data

    NARCIS (Netherlands)

    Fischer, C.E.; Keller, C.U.; Snik, F.

    2007-01-01

    We have processed full Stokes observations from the SOLIS VSM in the photospheric lines Fe I 630.15 nm and 630.25 nm. The data sets have high spectral and temporal resolution, moderate spatial resolution, and large polarimetric sensitivity and accuracy. We used LILIA, an LTE code written by

  13. Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors

    Science.gov (United States)

    2017-07-01

    AFRL-RY-WP-TR-2017-0143, Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors, Qing Hao, The University of Arizona. The report is available to the general public, including foreign nationals.

  14. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
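
    The sampling design itself is easy to illustrate. The sketch below keeps only the phenotypic extremes and fits an ordinary LASSO to synthetic genotypes; the quantile cutoffs and penalty are illustrative, and EPS-LASSO's decorrelated score test is not reproduced:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        G = rng.binomial(2, 0.3, size=(2000, 500)).astype(float)   # genotypes coded 0/1/2
        beta = np.zeros(500); beta[:5] = 0.5                       # 5 causal variants
        y = G @ beta + rng.normal(size=2000)                       # quantitative trait

        lo, hi = np.quantile(y, [0.1, 0.9])
        keep = (y <= lo) | (y >= hi)                               # extreme phenotype sample
        model = Lasso(alpha=0.05).fit(G[keep], y[keep])
        print(np.nonzero(model.coef_)[0][:10])                     # causal indices recovered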

  15. Heterologous prime-boost immunization of Newcastle disease virus vectored vaccines protected broiler chickens against highly pathogenic avian influenza and Newcastle disease viruses.

    Science.gov (United States)

    Kim, Shin-Hee; Samal, Siba K

    2017-07-24

    Avian influenza virus (AIV) is an important pathogen for both human and animal health. There is a great need to develop a safe and effective vaccine for AI infections in the field. Live-attenuated Newcastle disease virus (NDV) vectored AI vaccines have been shown to be effective, but preexisting antibodies to the vaccine vector can affect the protective efficacy of the vaccine in the field. To improve the efficacy of AI vaccines, we generated a novel vectored vaccine by using a chimeric NDV vector that is serologically distant from NDV. In this study, the protective efficacy of our vaccines was evaluated by using H5N1 highly pathogenic avian influenza virus (HPAIV) strain A/Vietnam/1203/2004, a prototype strain for vaccine development. The vaccine viruses were three chimeric NDVs expressing the hemagglutinin (HA) protein in combination with the neuraminidase (NA) protein, matrix 1 protein, or nonstructural 1 protein. Comparison of the protective efficacy of single and prime-boost immunizations indicated that prime immunization of 1-day-old SPF chicks with our vaccine viruses followed by boosting with the conventional NDV vector strain LaSota expressing the HA protein provided complete protection of chickens against mortality, clinical signs and virus shedding. Further verification of our heterologous prime-boost strategy using commercial broiler chickens suggested that sequential immunization of chickens with the chimeric NDV vector expressing the HA and NA proteins, followed by a boost with the NDV vector expressing the HA protein, can be a promising strategy for field vaccination against HPAIVs and against highly virulent NDVs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    Science.gov (United States)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. 50 youth archers with an average age and standard deviation of (17.0 ± .056) gathered from various archery programmes completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear, quadratic and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
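
    A minimal sketch of this pipeline on synthetic stand-in data (the feature names, cluster-to-label mapping, and cross-validation setup are illustrative assumptions, not the study's protocol):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = rng.normal(size=(50, 5))            # stand-ins for HR, RR, DBP, SBP, calories
        score = X @ rng.normal(size=5)          # stand-in for one-end shooting score
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
            score.reshape(-1, 1))               # HPA vs LPA groups from k-means

        for name, clf in [("linear", SVC(kernel="linear")),
                          ("quadratic", SVC(kernel="poly", degree=2)),
                          ("cubic", SVC(kernel="poly", degree=3))]:
            acc = cross_val_score(clf, X, labels, cv=5).mean()
            print(f"{name}: {acc:.2f}")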

  17. The identification of high potential archers based on relative psychological coping skills variables: A Support Vector Machine approach

    Science.gov (United States)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, A. P. P. Abdul; Razali Abdullah, Mohamad; Aizzat Zakaria, Muhammad; Muaz Alim, Muhammad; Arif Mat Jizat, Jessnor; Fauzi Ibrahim, Mohamad

    2018-03-01

    Support Vector Machine (SVM) has been shown to be a powerful learning algorithm for classification and prediction. However, the use of SVM for prediction and classification in sport is in its infancy. The present study classified and predicted high and low potential archers from a collection of psychological coping skills variables trained on different SVMs. 50 youth archers with an average age and standard deviation of (17.0 ± .056) gathered from various archery programmes completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear and fine radial basis function (RBF) kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy and precision throughout the exercise, with an accuracy of 92% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the fine RBF SVM. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.

  18. Scattering of massless vector, tensor, and other particles in string theory at high energy

    International Nuclear Information System (INIS)

    Antonov, E.N.

    1990-01-01

    The 2 → 2 and 2 → 3 processes are studied in the multi-Regge kinematics for gluons and gravitons, the first excited states of the open and closed strings. The factorization of the corresponding amplitudes is demonstrated. Explicit relations generalizing the Low-Gribov expressions are obtained in the kinematics where one of the external particles is produced with small transverse momentum. The expressions in the limit α' → 0 coincide with the results of Yang-Mills theory and gravitation at high energies

  19. High Precision Measurement of the differential vector boson cross-sections with the ATLAS detector

    CERN Document Server

    Armbruster, Aaron James; The ATLAS collaboration

    2017-01-01

    Measurements of the Drell-Yan production of W and Z/gamma* bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high precision measurements at a center-of-mass energy of 7 TeV. The measurements are performed for W+, W- and Z/gamma* bosons, integrated and as a function of the boson or lepton rapidity and the Z/gamma* mass. Unprecedented precision is reached and strong constraints on parton distribution functions, in particular the strange density, are found. Z cross sections are also measured at center-of-mass energies of 8 TeV and 13 TeV, and cross-section ratios to top-quark pair production have been derived. This ratio measurement leads to a cancellation of systematic effects and allows for a high precision comparison to theory predictions. The cross section of single W events has also been measured precisely at center-of-mass energies of 8 TeV and 13 TeV and the W charge asymmetry has been determ...

  20. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
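
    For orientation, classical reduced-rank regression (the starting point that GRRLF generalizes with latent factors and nonparametric field estimates) can be sketched as an OLS fit followed by an SVD truncation; the data and rank below are synthetic stand-ins:

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 20))                               # covariates (e.g., genotypes)
        B_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 100))  # rank-2 signal
        Y = X @ B_true + rng.normal(size=(300, 100))                 # flattened tensor response

        B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]                 # unconstrained fit
        U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
        rank = 2
        Y_fit = U[:, :rank] * s[:rank] @ Vt[:rank]                   # rank-constrained fitted values
        print(np.linalg.norm(Y - Y_fit) / np.linalg.norm(Y))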

  1. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-25

    This work proposes an approach for distribution system load forecasting that aims to provide highly accurate short-term load forecasts with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search area from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
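
    The two-step search is straightforward to sketch. In the toy version below the second step is a local grid refinement rather than PSO (a deliberate simplification), and the synthetic load series, window length, and parameter ranges are all illustrative:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        t = np.arange(400)
        load = 50 + 10 * np.sin(2 * np.pi * t / 96) + rng.normal(scale=1.0, size=400)

        window = 24                                   # lagged-load features
        X = np.array([load[i:i + window] for i in range(len(load) - window)])
        y = load[window:]

        def cv_mae(C, gamma):
            return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                                   scoring="neg_mean_absolute_error").mean()

        # step 1: a coarse grid traverse narrows the global (C, gamma) space
        coarse = [(C, g) for C in 10.0 ** np.arange(-1, 3)
                  for g in 10.0 ** np.arange(-4, 0)]
        C0, g0 = max(coarse, key=lambda p: cv_mae(*p))

        # step 2: refine around the coarse optimum (PSO in the paper)
        fine = [(C0 * f, g0 * h) for f in (0.5, 1.0, 2.0) for h in (0.5, 1.0, 2.0)]
        C1, g1 = max(fine, key=lambda p: cv_mae(*p))
        print(f"selected C={C1:.3g}, gamma={g1:.3g}")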

  2. Victims and vectors: highly pathogenic avian influenza H5N1 and the ecology of wild birds

    Science.gov (United States)

    Takekawa, John Y.; Prosser, Diann J.; Newman, Scott H.; Muzaffar, Sabir Bin; Hill, Nichola J.; Yan, Baoping; Xiao, Xiangming; Lei, Fumin; Li, Tianxian; Schwarzbach, Steven E.; Howell, Judd A.

    2010-01-01

    The emergence of highly pathogenic avian influenza (HPAI) viruses has raised concerns about the role of wild birds in the spread and persistence of the disease. In 2005, an outbreak of the highly pathogenic subtype H5N1 killed more than 6,000 wild waterbirds at Qinghai Lake, China. Outbreaks have continued to periodically occur in wild birds at Qinghai Lake and elsewhere in Central China and Mongolia. This region has few poultry but is a major migration and breeding area for waterbirds in the Central Asian Flyway, although relatively little is known about migratory movements of different species and connectivity of their wetland habitats. The scientific debate has focused on the role of waterbirds in the epidemiology, maintenance and spread of HPAI H5N1: to what extent are they victims affected by the disease, or vectors that have a role in disease transmission? In this review, we summarise the current knowledge of wild bird involvement in the ecology of HPAI H5N1. Specifically, we present details on: (1) origin of HPAI H5N1; (2) waterbirds as LPAI reservoirs and evolution into HPAI; (3) the role of waterbirds in virus spread and persistence; (4) key biogeographic regions of outbreak; and (5) applying an ecological research perspective to studying AIVs in wild waterbirds and their ecosystems.

  3. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need to consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse, affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
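
    A simulation illustrating the structural assumption behind such models: a few sparse latent factors generate a highly structured, low-rank-plus-diagonal G-matrix. This is a toy numpy sketch, not the authors' Bayesian sampler; all sizes and variances are invented:

```python
# A few sparse latent factors -> structured genetic covariance (G-matrix).
import numpy as np

rng = np.random.default_rng(1)
p, k, n = 200, 3, 500                       # traits, latent factors, individuals
Lambda = np.zeros((p, k))
for j in range(k):                          # each factor loads on only ~5% of traits
    idx = rng.choice(p, size=10, replace=False)
    Lambda[idx, j] = rng.normal(0, 1, 10)

F = rng.normal(size=(n, k))                 # latent (e.g. developmental) factor scores
E = rng.normal(scale=0.5, size=(n, p))      # residual/environmental noise
Y = F @ Lambda.T + E                        # observed high-dimensional traits

G_hat = np.cov(Y, rowvar=False) - 0.25 * np.eye(p)  # crude moment estimate of G

# The eigenvalue spectrum reveals the effective rank k of the structured part.
eigvals = np.linalg.eigvalsh(G_hat)[::-1]
print("leading eigenvalues:", np.round(eigvals[:5], 2))
```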

  4. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  5. Volume scanning three-dimensional display with an inclined two-dimensional display and a mirror scanner

    Science.gov (United States)

    Miyazaki, Daisuke; Kawanishi, Tsuyoshi; Nishimura, Yasuhiro; Matsushita, Kenji

    2001-11-01

    A new three-dimensional display system based on a volume-scanning method is demonstrated. To form a three-dimensional real image, an inclined two-dimensional image is rapidly moved with a mirror scanner while the cross-section patterns of a three-dimensional object are displayed sequentially. A vector-scan CRT display unit is used to obtain a high-resolution image. An optical scanning system is constructed with concave mirrors and a galvanometer mirror. It is confirmed that three-dimensional images, formed by the experimental system, satisfy all the criteria for human stereoscopic vision.

  6. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶-10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise in modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can give scientists the opportunity to analyze all experimental data more effectively.
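
    A sketch contrasting the two strategies named above: nested one-dimensional adaptive quadrature versus quasi-Monte Carlo on the same four-dimensional integral. The Gaussian integrand is an arbitrary stand-in for the SNS integrand, which is not given in the abstract:

```python
# Nested 1-D quadrature vs. quasi-Monte Carlo for a 4-D integral.
import numpy as np
from scipy import integrate
from scipy.stats import qmc

def f(x1, x2, x3, x4):
    return np.exp(-(x1**2 + x2**2 + x3**2 + x4**2))

# Nested 1-D quadrature over the unit hypercube (scipy iterates QUADPACK).
val_quad, err = integrate.nquad(f, [[0, 1]] * 4)

# Quasi-Monte Carlo with a Sobol sequence: one vectorized pass, no nesting.
sampler = qmc.Sobol(d=4, scramble=True, seed=0)
pts = sampler.random_base2(m=14)            # 2**14 low-discrepancy points
val_qmc = f(*pts.T).mean()                  # hypercube volume is 1

print(f"nquad: {val_quad:.6f} +/- {err:.1e}   QMC: {val_qmc:.6f}")
```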

  7. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure to form a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
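
    One concrete reading of the projection/orthogonal-component construction, offered as an assumption rather than the paper's exact definition: for unit vectors, similarity is cos(theta) (the projection length) and dissimilarity is sin(theta) (the norm of the orthogonal component). For nonnegative data the angles lie in [0, pi/2], where this dissimilarity obeys the triangle inequality, which the snippet checks numerically:

```python
# Similarity = cosine (projection); dissimilarity = orthogonal component norm.
import numpy as np

def sim_dissim(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    c = np.clip(c, -1.0, 1.0)
    return c, np.sqrt(1.0 - c * c)          # (cosine, orthogonal component)

rng = np.random.default_rng(0)
X = rng.random((50, 1000))                   # nonnegative high-dimensional data
worst = 0.0
for _ in range(2000):
    i, j, k = rng.choice(50, 3, replace=False)
    dij = sim_dissim(X[i], X[j])[1]
    dik = sim_dissim(X[i], X[k])[1]
    dkj = sim_dissim(X[k], X[j])[1]
    worst = max(worst, dij - (dik + dkj))    # > 0 would violate the inequality
print("max triangle-inequality violation:", worst)
```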

  8. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity gained over the last decade have spurred the development of a variety of single cell omics tools at lightning pace. The resulting high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods, along with the designated biological questions they seek to answer, are discussed. Particular attention is given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed that superconductivity be described in the framework of a nonadiabatic Heisenberg model in order to interpret the outstanding symmetry properties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-Tc compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energy excitations (of well-defined symmetry) that mediate pair formation. This fact is proposed to be responsible for the high transition temperatures of these compounds. (author)

  10. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
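
    A minimal sketch of the kind of estimator studied here, under stated assumptions: a pairwise-complete ("generalized sample") covariance computed from the jointly observed entries, followed by entrywise thresholding for the sparse case. The centring, threshold constant, and data are simplifications; the paper's procedures and rates are more refined:

```python
# Pairwise-complete covariance under missing-completely-at-random data,
# then entrywise thresholding for a sparse covariance estimate.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
mask = rng.random((n, p)) < 0.8              # ~20% missing completely at random
Xobs = np.where(mask, X, np.nan)

def pairwise_cov(Z):
    M = ~np.isnan(Z)
    Zf = np.nan_to_num(Z)
    n_jk = M.T.astype(float) @ M             # counts of jointly observed pairs
    sums = Zf.T @ Zf                         # zeros at missing entries drop out
    mean_j = np.nansum(Z, axis=0) / M.sum(axis=0)
    return sums / n_jk - np.outer(mean_j, mean_j)  # crude centring

Sigma_hat = pairwise_cov(Xobs)
tau = 2 * np.sqrt(np.log(p) / n)             # universal-style threshold (assumed constant)
Sigma_sparse = np.where(np.abs(Sigma_hat) > tau, Sigma_hat, 0.0)
np.fill_diagonal(Sigma_sparse, np.diag(Sigma_hat))
print("off-diagonal entries kept:", int((Sigma_sparse != 0).sum() - p))
```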

  11. The identification of high potential archers based on fitness and motor ability variables: A Support Vector Machine approach.

    Science.gov (United States)

    Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali

    2018-02-01

    Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, SVM has rarely been applied to quantify or discriminate between low- and high-performance athletes in a specific sport. The present study classified and predicted high- and low-potential archers from a set of fitness and motor ability variables using different SVM kernel algorithms. Fifty youth archers (mean age 17.0 ± 0.6 years) drawn from various archery programmes completed a six-arrow shooting score test. Standard fitness and ability measurements, namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength, were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine RBF, medium RBF, and coarse RBF kernel functions were trained on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA). The linear, quadratic, cubic, and medium RBF kernel models demonstrated excellent classification accuracy of 97.5% (a 2.5% error rate) in predicting the HPA and the LPA. These findings can help coaches and sports managers recognise high-potential athletes from a few selected fitness and motor ability variables, saving cost, time and effort during talent identification programmes. Copyright © 2017 Elsevier B.V. All rights reserved.
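
    A sketch of the pipeline just described: agglomerative clustering labels the archers, then SVMs with several kernels are cross-validated on the fitness variables. The data are synthetic placeholders and the linkage method is an assumption (the abstract does not specify it); quadratic and cubic kernels are taken to mean polynomial kernels of degree 2 and 3:

```python
# HACA labels -> SVMs with several kernels, as in the study design above.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 50 archers x 6 variables (grip, jumps, balance, upper/core strength, score)
X = np.vstack([rng.normal(0.5, 0.15, (25, 6)), rng.normal(-0.5, 0.15, (25, 6))])

labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

for kernel, params in [("linear", {}), ("poly", {"degree": 2}),
                       ("poly", {"degree": 3}), ("rbf", {"gamma": "scale"})]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, **params))
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{kernel} {params}: {acc:.1%} cross-validated accuracy")
```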

  12. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high-electron-mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large-surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction, resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSCs is demonstrated. © 2011 American Chemical Society.

  13. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching that considers the transport of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residue formation in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to affect the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  14. Reduced, three-dimensional, nonlinear equations for high-β plasmas including toroidal effects

    International Nuclear Information System (INIS)

    Schmalz, R.

    1980-11-01

    The resistive MHD equations for toroidal plasma configurations are reduced by expanding to second order in ε, the inverse aspect ratio, allowing for a high β = μ₀p/B² of order ε. The result is a closed system of nonlinear, three-dimensional equations in which the fast magnetohydrodynamic time scale is eliminated. In particular, the equation for the toroidal velocity remains decoupled. (orig.)

  15. Two- and three-dimensional heat analysis inside a high-pressure electrical discharge tube

    International Nuclear Information System (INIS)

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents the heat transfer analysis for a horizontal high-pressure mercury-vapor discharge tube. To obtain a more realistic numerical simulation, heat radiation in several wavelength bands is modeled in addition to convective and conductive heat transfer. The analysis for different gases at different pressures in the two- and three-dimensional cases has been carried out and the results compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied.

  16. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
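
    A minimal numerical experiment in the spirit of this idea, using the logistic map as a stand-in for the flows and high-dimensional systems treated in the paper: the map parameter is modulated periodically and the number of distinct late-time values is counted (a small count signals a limit cycle, a large one persisting chaos). Which drive periods succeed depends on amplitude and on the system's unstable orbits, as the abstract notes; all numbers here are illustrative:

```python
# Scan drive periods of a small periodic parametric perturbation of the
# logistic map x -> a*x*(1-x) and count distinct late-time values.
import numpy as np

def distinct_late_values(a0=3.8, amp=0.05, period=2, n=5000):
    x = 0.5
    tail = set()
    for i in range(n):
        a = a0 * (1.0 + amp * np.cos(2 * np.pi * i / period))  # modulated parameter
        x = a * x * (1.0 - x)
        if i >= n - 500:
            tail.add(round(x, 5))
    return len(tail)

print("no drive :", distinct_late_values(amp=0.0))
for T in (2, 3, 4, 6):
    print(f"period {T}:", distinct_late_values(period=T))
```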

  17. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) on a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algorithm ...

  18. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in the respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of an undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together becomes a driving force for the creation of innovative developments in various research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for the creation of such innovative developments through the incorporation of a wide variety of viewpoints from various research fields. The objective of this meeting is to offer a forum where researchers with an interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  1. Vector geometry

    CERN Document Server

    Robinson, Gilbert de B

    2011-01-01

    This brief undergraduate-level text by a prominent Cambridge-educated mathematician explores the relationship between algebra and geometry. An elementary course in plane geometry is the sole requirement for Gilbert de B. Robinson's text, which is the result of several years of teaching and learning the most effective methods from discussions with students. Topics include lines and planes, determinants and linear equations, matrices, groups and linear transformations, and vectors and vector spaces. Additional subjects range from conics and quadrics to homogeneous coordinates and projective geometry.

  2. Theoretical study for aerial image intensity in resist in high numerical aperture projection optics and experimental verification with one-dimensional patterns

    Science.gov (United States)

    Shibuya, Masato; Takada, Akira; Nakashima, Toshiharu

    2016-04-01

    In optical lithography, high-performance exposure tools are indispensable for obtaining not only fine patterns but also precise pattern widths. Since an accurate theoretical method is necessary to predict these values, some pioneering and valuable studies have been proposed. However, there might be some ambiguity or lack of consensus regarding the treatment of diffraction by the object, the incoming inclination factor onto the image plane in scalar imaging theory, and the paradoxical phenomenon of the inclined entrance plane wave onto the image in vector imaging theory. We have reconsidered imaging theory in detail and also phenomenologically resolved the paradox. By comparing theoretical aerial image intensity with experimental pattern widths for one-dimensional patterns, we have validated our theoretical considerations.

  3. A Simple and High Performing Rate Control Initialization Method for H.264 AVC Coding Based on Motion Vector Map and Spatial Complexity at Low Bitrate

    Directory of Open Access Journals (Sweden)

    Yalin Wu

    2014-01-01

    The temporal complexity of video sequences can be characterized by a motion vector map consisting of the motion vectors of each macroblock (MB). In order to obtain the optimal initial QP (quantization parameter) for video sequences with different spatial and temporal complexities, this paper proposes a simple, high-performance method that determines the initial QP for a given target bit rate based on the motion vector map and the spatial complexity. The proposed algorithm produces reconstructed video sequences with outstanding and stable quality. For any video sequence, the initial QP can be easily determined from matrices indexed by target bit rate and mapped spatial complexity using the proposed mapping method. Experimental results show that the proposed algorithm achieves better objective and subjective performance than other conventional methods.

  4. Germline Cas9 expression yields highly efficient genome engineering in a major worldwide disease vector, Aedes aegypti.

    Science.gov (United States)

    Li, Ming; Bui, Michelle; Yang, Ting; Bowman, Christian S; White, Bradley J; Akbari, Omar S

    2017-12-05

    The development of CRISPR/Cas9 technologies has dramatically increased the accessibility and efficiency of genome editing in many organisms. In general, in vivo germline expression of Cas9 results in substantially higher activity than embryonic injection. However, no transgenic lines expressing Cas9 have been developed for the major mosquito disease vector Aedes aegypti. Here, we describe the generation of multiple stable, transgenic Ae. aegypti strains expressing Cas9 in the germline, resulting in dramatic improvements in both the consistency and efficiency of genome modifications using CRISPR. Using these strains, we disrupted numerous genes important for normal morphological development, and even generated triple mutants from a single injection. We have also managed to increase the rates of homology-directed repair by more than an order of magnitude. Given the exceptional mutagenic efficiency and specificity of the Cas9 strains we engineered, they can be used for high-throughput reverse genetic screens to help functionally annotate the Ae. aegypti genome. Additionally, these strains represent a step toward the development of novel population control technologies targeting Ae. aegypti that rely on Cas9-based gene drives. Copyright © 2017 the Author(s). Published by PNAS.

  5. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the intraprediction, interprediction, and motion vector (MV) information in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, intraprediction in HEVC can be skipped for regions that are interpredicted in H.264/AVC, reducing coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed based on the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation in HEVC coding. Simulation results show that the proposed algorithm achieves a significant coding time reduction with only a slight rate-distortion loss, compared with existing transcoding algorithms and normal HEVC coding.
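
    A toy version of the merge-and-interpolate rule sketched above: four H.264 macroblocks are combined into one HEVC PU when their motion vectors differ by less than a threshold, and the PU's vector is taken as a distance-weighted combination. The threshold, block geometry, and weighting are illustrative assumptions; the paper's exact rule, which also weights by areas, may differ:

```python
# Merge four 8x8 macroblock MVs into one 16x16 PU MV if their spread is small.
import numpy as np

mvs = np.array([[5.0, 2.0], [5.5, 2.0], [4.75, 1.75], [5.25, 2.25]])  # 4 MB MVs
centers = np.array([[4, 4], [12, 4], [4, 12], [12, 12]], float)       # MB centers
pu_center = np.array([8.0, 8.0])                                      # PU center
THRESH = 1.0  # quarter-pel units, assumed

diffs = [np.abs(mvs[i] - mvs[j]).max()
         for i in range(4) for j in range(i + 1, 4)]
if max(diffs) < THRESH:
    w = 1.0 / (np.linalg.norm(centers - pu_center, axis=1) + 1e-9)
    pu_mv = (w[:, None] * mvs).sum(0) / w.sum()   # distance-weighted MV
    print("merged PU motion vector:", np.round(pu_mv, 3))
else:
    print("MV spread too large; keep separate PUs")
```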

  6. Calculus with vectors

    CERN Document Server

    Treiman, Jay S

    2014-01-01

    Calculus with Vectors grew out of a strong need for a beginning calculus textbook for undergraduates who intend to pursue careers in STEM fields. The approach introduces vector-valued functions from the start, emphasizing the connections between one-variable and multi-variable calculus. The text includes early vectors and early transcendentals and takes a rigorous but informal approach to vectors. Examples and focused applications are well presented along with an abundance of motivating exercises. All three-dimensional graphs have rotatable versions included as extra source materials and may be freely downloaded and manipulated with Maple Player; a free Maple Player App is available for the iPad on iTunes. The approaches taken to topics such as the derivation of the derivatives of sine and cosine, the approach to limits, and the use of "tables" of integration have been modified from the standards seen in other textbooks in order to maximize the ease with which students may comprehend the material. Additio...

  7. VECTOR INTEGRATION

    NARCIS (Netherlands)

    Thomas, E. G. F.

    2012-01-01

    This paper deals with the theory of integration of scalar functions with respect to a measure with values in a, not necessarily locally convex, topological vector space. It focuses on the extension of such integrals from bounded measurable functions to the class of integrable functions, proving

  8. High-definition resolution three-dimensional imaging systems in laparoscopic radical prostatectomy: randomized comparative study with high-definition resolution two-dimensional systems.

    Science.gov (United States)

    Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi

    2015-08-01

    Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery is converting two-dimensional (2D) images into 3D images and rearranging depth perception. 3D imaging may remove the need for depth perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcomes of 3D high-definition (HD) and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether LRP under HD 3D imaging is superior to that under HD 2D imaging in perioperative outcome, feasibility, and fatigue. One hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was the time to perform vesicourethral anastomosis (VUA), a technically demanding step that encompasses many of the difficulties encountered in laparoscopic surgery. VUA time was not significantly shorter in the 3D group (26.7 min, mean) than in the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors of shorter VUA times (p < 0.001 and p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversions from 3D to 2D or from LRP to open RP were observed. Fatigue was evaluated by a simulator sickness questionnaire and critical flicker frequency; results did not differ between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.

  9. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of earlier prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n → ∞ (where n is the number of units of the hypercycle), thus suggesting that increasing the number of hypercycle units entails a longer resilience time before extinction because of the ghost. Furthermore, by means of numerical analysis the dynamics of three large hypercycle networks is also studied, focusing on their extinction dynamics associated with the ghosts. Such networks make it possible to explore the properties of the ghosts living in high dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.

  10. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios are widely used in fields including the aerospace and defense industries, and their dimensional measurement is a challenge in precision measurement and instrumentation. To address this challenge, several probes for the precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures based on spherical coupling (SC) with double optical fibers, micro-focal-length collimation (MFL-collimation) and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of these probes are analyzed. Several approaches are proposed to improve their performance: a two-dimensional orthogonal path arrangement enhances the dimensional measurement ability of MFL-collimation probes, while a high-resolution, fast-response interrogation method based on a differential scheme improves the accuracy and dynamic characteristics of the FBG probes. Experiments for these special structural fiber probes are presented with a focus on their characteristics, and engineering applications are described to demonstrate their practical utility. Several integration techniques are used to improve the accuracy and real-time performance of the engineering applications. The effectiveness of these fiber probes is thereby verified through both analysis and experiment.

  11. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a similarity-dissimilarity plot which can project a high dimensional space onto a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to identify with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.

  12. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
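
    The gates named above have a compact matrix form: in dimension d, the generalized Pauli X cyclically shifts the basis (X|j⟩ = |j+1 mod d⟩) and Z applies phases (Z|j⟩ = ω^j|j⟩ with ω = e^{2πi/d}). The snippet below just checks the defining algebra numerically for d = 4; it is a sanity check on the definitions, not a model of the photonic experiment:

```python
# Generalized Pauli X and Z for a d-level system, with their Weyl relation.
import numpy as np

d = 4
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # cyclic shift: X|j> = |j+1 mod d>
Z = np.diag(w ** np.arange(d))               # phase gate:  Z|j> = w**j |j>

assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))  # X**d = identity
assert np.allclose(Z @ X, w * (X @ Z))       # Weyl commutation: ZX = w * XZ
print("X =\n", X.real.astype(int))
```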

  13. A new methodology for studying dynamics of aerosol particles in sneeze and cough using a digital high-vision, high-speed video system and vector analyses.

    Directory of Open Access Journals (Sweden)

    Hidekazu Nishimura

    Microbial pathogens of respiratory infectious diseases are often transmitted through particles in sneezes and coughs; therefore, understanding particle movement is important for infection control. Images of sneezes induced by nasal cavity stimulation in healthy adult volunteers were taken by a digital high-vision, high-speed video system equipped with a computer system and treated as a research model. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s, but decreased as the particles moved forward; the momentum of the particles seemed to be lost by 0.15-0.20 s, after which they began a diffusive movement. An approximate equation for velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. This methodology was also applied to a cough. Microclouds contained in smoke exhaled with a voluntary cough by a volunteer after smoking one breath of cigarette were traced as visible aerodynamic surrogates for the invisible bioparticles of a cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reaches of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, were estimated by calculation using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both achieved in about 0.2 s, suggesting that quantitative data on the dynamics of sneezes and coughs can be obtained by such calculations.

  14. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

    High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data are the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g., multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g., plots of aggregated signals).

  15. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality images at high speed. However, the development of high-resolution CT and ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method was changed early on from the parallel-beam system to the fan-beam system in order to greatly shorten scanning time, and the filtered back projection (FBP) method has been employed to directly process fan-beam projection data for reconstruction. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the FBP method, was proposed, it had not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded by the interpolation in the rebinning step. Therefore, the effect of the rebinning interpolation error on the reconstructed images has been analyzed theoretically, and the use of spline interpolation, which yields high-quality images with fewer errors, is demonstrated by numerical and visual evaluation based on simulated and actual data. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
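
    A sketch of the core rebinning step discussed above: fan-beam projections p(beta, gamma) are resampled onto parallel-beam coordinates via theta = beta + gamma, s = D·sin(gamma), using spline interpolation, the choice the abstract favours. The geometry values and the synthetic sinogram are placeholders:

```python
# Rebin a fan-beam sinogram to parallel-beam coordinates with cubic splines.
import numpy as np
from scipy.ndimage import map_coordinates

D = 100.0                                    # source-to-isocenter distance (assumed)
n_beta, n_gamma = 360, 129
betas = np.linspace(0, 2 * np.pi, n_beta, endpoint=False)
gammas = np.linspace(-0.25, 0.25, n_gamma)   # fan angles (rad)
fan = np.random.rand(n_beta, n_gamma)        # stand-in fan-beam sinogram

thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
s_vals = np.linspace(-D * np.sin(0.25), D * np.sin(0.25), 129)

# Invert the mapping: gamma = arcsin(s/D), beta = theta - gamma, then look up
# fractional indices in the fan sinogram with cubic-spline interpolation.
G, T = np.meshgrid(np.arcsin(s_vals / D), thetas)
B = (T - G) % (2 * np.pi)
bi = B / (2 * np.pi) * n_beta                          # fractional beta index
gi = (G - gammas[0]) / (gammas[-1] - gammas[0]) * (n_gamma - 1)
parallel = map_coordinates(fan, [bi, gi], order=3, mode="wrap")
print("parallel-beam sinogram:", parallel.shape)
```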

  16. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Supercapacitors are a new type of energy-storage device that has attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered a promising supercapacitor material because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and three-dimensional (3D) graphene foam is then prepared by using nickel foam as a template and FeCl3/HCl solution as an etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance, exhibiting a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. The 3D graphene foam is expected to have potential applications in supercapacitors.

  17. Sensitivity of Support Vector Machine Predictions of Passive Microwave Brightness Temperature Over Snow-covered Terrain in High Mountain Asia

    Science.gov (United States)

    Ahmad, J. A.; Forman, B. A.

    2017-12-01

    High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to run-off downstream. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information System (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVMs), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature, such as snow temperature, air temperature, and vegetation temperature, have positive NSCs, whereas input states that increase volume scattering, such as snow water equivalent or snow density, yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent high-frequency variability. The adherence of this machine learning algorithm to first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
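
    A sketch of how a normalized sensitivity coefficient can be computed for a trained regressor by finite differences, NSC_i = (dTb/dx_i)·(x_i/Tb). The trained model, state-variable names, and toy "physics" below are synthetic stand-ins, not the study's LIS output or AMSR-E data:

```python
# Finite-difference NSCs of a trained SVR around the mean state.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
names = ["air_temp", "snow_temp", "SWE", "snow_density"]
X = rng.uniform(0.5, 1.5, (300, 4))
Tb = (150 + 60 * X[:, 0] + 30 * X[:, 1] - 40 * X[:, 2] - 20 * X[:, 3]
      + rng.normal(0, 1, 300))               # toy physics: +temperature, -scattering
model = SVR(C=100.0, gamma="scale").fit(X, Tb)

x0 = X.mean(axis=0)                          # evaluate NSCs at the mean state
f0 = model.predict([x0])[0]
for i, name in enumerate(names):
    dx = 0.01 * x0[i]
    xp = x0.copy()
    xp[i] += dx
    nsc = (model.predict([xp])[0] - f0) / dx * (x0[i] / f0)
    print(f"NSC[{name}] = {nsc:+.3f}")
```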

  18. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  19. Hierarchical one-dimensional ammonium nickel phosphate microrods for high-performance pseudocapacitors

    CSIR Research Space (South Africa)

    Raju, K

    2015-12-01

    Full Text Available :17629 | DOI: 10.1038/srep17629 www.nature.com/scientificreports Hierarchical One-Dimensional Ammonium Nickel Phosphate Microrods for High-Performance Pseudocapacitors Kumar Raju1 & Kenneth I. Ozoemena1,2 High-performance electrochemical capacitors... OPEN w w w . n a t u r e . c o m / s c i e n t i f i c r e p o r t s / 2S C I E N T I F I C REPORTS | 5:17629 | DOI: 10.1038/srep17629 Hierarchical 1-D and 2-D materials maximize the supercapacitive properties due to their unique ability to permit ion...

  20. On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy

    Science.gov (United States)

    Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.

    2017-10-01

    High dose rate brachytherapy calls for frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions, which are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
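
    A sketch of the reference-free idea: multi-dimensional scaling reconstructs dwell-point geometry (up to rotation and translation) from inter-point distances alone, so planned and measured configurations can be compared without an external frame. The catheter path, noise level, and use of scikit-learn's metric MDS are illustrative assumptions:

```python
# Reference-free comparison of dwell-point geometries via MDS.
import numpy as np
from sklearn.manifold import MDS

t = np.linspace(0, 1, 20)
planned = np.column_stack([t, np.sin(2 * t), 0.1 * t])   # mock catheter path, cm
measured = planned + np.random.default_rng(0).normal(0, 0.05, planned.shape)

def pairwise(P):
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

emb = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
rec = emb.fit_transform(pairwise(measured))              # reference-free reconstruction

# Compare shapes via their distance matrices (invariant to pose).
err = np.abs(pairwise(rec) - pairwise(planned)).max()
print(f"max inter-dwell distance deviation: {err:.3f} cm")
```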

  1. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  2. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but dramatically improves the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
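
    The LSB-tree builds on classical locality-sensitive hashing; the sketch below shows only the underlying LSH primitive for Euclidean NN search (random projections quantized into buckets), not the B-tree machinery or the paper's guarantees. The number of hash functions and bucket width are arbitrary choices:

```python
# Minimal Euclidean LSH: hash points into buckets, scan only the query's bucket.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n, dim, m, w = 10000, 50, 2, 4.0             # points, dims, hash funcs, bucket width
data = rng.normal(size=(n, dim))
A = rng.normal(size=(m, dim))                # random projection directions
b = rng.uniform(0, w, m)

def h(x):                                    # m-dimensional integer hash key
    return tuple(np.floor((A @ x + b) / w).astype(int))

table = defaultdict(list)
for i, x in enumerate(data):
    table[h(x)].append(i)

q = rng.normal(size=dim)
cand = table[h(q)]                           # only scan the colliding bucket
if cand:
    best = min(cand, key=lambda i: np.linalg.norm(data[i] - q))
    print(f"approximate NN {best}, scanned {len(cand)} of {n} points")
else:
    print("empty bucket: real systems use several tables / multi-probe")
```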

  3. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated

  4. Flight-Determined Subsonic Longitudinal Stability and Control Derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) with Thrust Vectoring

    Science.gov (United States)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1997-01-01

    The subsonic longitudinal stability and control derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) are extracted from dynamic flight data using a maximum likelihood parameter identification technique. The technique uses the linearized aircraft equations of motion in their continuous/discrete form and accounts for state and measurement noise as well as thrust-vectoring effects. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft, particularly at high angles of attack. Thrust vectoring was implemented using electrohydraulically-actuated nozzle postexit vanes and a specialized research flight control system. During maneuvers, a control system feature provided independent aerodynamic control surface inputs and independent thrust-vectoring vane inputs, thereby eliminating correlations between the aircraft states and controls. Substantial variations in control excitation and dynamic response were exhibited for maneuvers conducted at different angles of attack. Opposing vane interactions caused most thrust-vectoring inputs to experience some exhaust plume interference and thus reduced effectiveness. The estimated stability and control derivatives are plotted, and a discussion relates them to predicted values and maneuver quality.
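
    A skeleton of output-error maximum likelihood estimation of the kind used for stability and control derivatives: simulate linearized dynamics for candidate parameters and minimize the measurement residuals. A scalar model with made-up coefficients stands in for the full aircraft equations, and the state noise that the HARV analysis models is ignored here:

```python
# Output-error ML estimation of linear dynamics x_dot = a*x + b*u from noisy data.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.02, 500
u = np.sin(0.5 * np.arange(N) * dt)          # control (e.g. elevator-like) input
rng = np.random.default_rng(0)

def simulate(a, b):                          # forward Euler integration
    x = np.zeros(N)
    for k in range(N - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

z = simulate(-1.2, 0.8) + rng.normal(0, 0.01, N)   # synthetic "flight" measurements

def neg_log_lik(p):
    r = z - simulate(*p)
    return 0.5 * N * np.log(r @ r)           # concentrated Gaussian likelihood

est = minimize(neg_log_lik, x0=[-0.5, 0.5], method="Nelder-Mead")
print("estimated (a, b):", np.round(est.x, 3), " true: (-1.2, 0.8)")
```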

  5. Five-dimensional visualization of phase transition in BiNiO3 under high pressure

    International Nuclear Information System (INIS)

    Liu, Yijin; Wang, Junyue; Yang, Wenge; Azuma, Masaki; Mao, Wendy L.

    2014-01-01

    Colossal negative thermal expansion was recently discovered in BiNiO3, associated with a low-density to high-density phase transition under high pressure. The varying proportion of co-existing phases plays a key role in the macroscopic behavior of this material. Here, we utilize a recently developed X-ray absorption near edge spectroscopy tomography method and resolve the mixture of high/low pressure phases as a function of pressure at tens-of-nanometer resolution, taking advantage of the charge transfer during the transition. This five-dimensional (X, Y, Z, energy, and pressure) visualization of the phase boundary provides a high resolution method to study the interface dynamics of the high/low pressure phases.

  6. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation ... that allow effective inference in problems with a high degree of complexity (e.g. several thousand genes) and a small number of observations (e.g. 10-100), as typically occurs in high-throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we construct a compact representation of the co-expression network that allows identification of regions with a high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than
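
    A sketch of estimating a co-expression network as a Gaussian graphical model: edges are the nonzero entries of a sparse estimated precision matrix. The paper works with decomposable graphical models; graphical lasso is used here as an accessible stand-in, on simulated expression data with a chain dependence structure:

```python
# Co-expression network = sparse Gaussian graphical model on p genes, n samples.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 30, 60                                # genes comparable to samples, as is typical
prec = np.eye(p)
for i in range(p - 1):                       # chain dependence: gene i ~ gene i+1
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

model = GraphicalLasso(alpha=0.1).fit(X)
edges = [(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(model.precision_[i, j]) > 1e-6]
print(f"{len(edges)} co-expression edges recovered; first few: {edges[:5]}")
```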

  7. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of the instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  8. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► The 3D graphene/polyaniline composite had never been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized by in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant-current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained by means of CV with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of 3D graphene contribute to the high specific capacitance and good cyclic life.

  9. Hydraulic performance numerical simulation of high specific speed mixed-flow pump based on quasi three-dimensional hydraulic design method

    International Nuclear Information System (INIS)

    Zhang, Y X; Su, M; Hou, H C; Song, P F

    2013-01-01

    This research adopts the quasi three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump, with the aim of verifying the hydraulic design method and improving hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iteratively solving the continuity and momentum equations of the fluid. The inverse problem is completed using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the shape of the impeller and the flow field information are obtained once the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades and the reflux phenomenon are analyzed. The numerical results show that the quasi three-dimensional hydraulic design method for a high specific speed mixed-flow pump improves the hydraulic performance, reveals the main characteristics of the internal flow of the mixed-flow pump, and provides a basis for judging the rationality of the hydraulic design and for improvement and optimization of the hydraulic model.

  10. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems, and the analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types, and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar-valued and vector-valued), thus covering both scalar and vector fields and vector integration.

  11. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation

    Directory of Open Access Journals (Sweden)

    Zhou Yang

    2017-01-01

    Full Text Available High pressure, high speed, low noise and miniaturization are the directions of development for hydraulic pumps. Following this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulation with and without cavitation inside the HRP is completed by means of computational fluid dynamics (CFD) in this paper, which contributes to understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.

  12. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignment represents in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of these data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  13. Instability and damping of one-dimensional high-amplitude Langmuir waves

    International Nuclear Information System (INIS)

    Buchel'nikova, N.S.; Matochkin, E.P.

    1981-01-01

    Numerical experiments (particle-in-cell method) on the instability and damping of one-dimensional Langmuir waves in the region E₀²/8πnT > m/M > (k₀r_d)² (k₀ is the wave vector, M the ion mass, m the electron mass, v = √(T/M), v_ph = ω₀/k₀, ω₀ the proper plasma frequency) are performed. The numerical experiments cover a wide range of initial wave parameters: E₀²/8πnT ≈ 4×10²-10⁻², v_ph/v_T ≈ 3-160, M/m = 10², in some cases M/m = 10³. It is shown that the basic processes are modulational instability with a modulation length less than the wavelength, wave conversion at density inhomogeneities, and electron capture by the wave or its harmonics. Depending on the initial wave parameters, the predominant role is played by one or another of these processes. In the range of linear waves, E₀²/8πnT < 10⁻³/(k₀r_d)², the modulational instability leads to the collapse. In the range 4×10⁻²/(k₀r_d)² > E₀²/8πnT > 10⁻³/(k₀r_d)², all three processes play a comparable role. In the range of strong damping, E₀²/8πnT > 4×10⁻²/(k₀r_d)², the main part is played by electron capture by the wave, resulting in damping considerably exceeding the Landau damping. (in Russian)

  14. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulated and real data. In our computational studies, though limited, the proposed method performed superbly.
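
    The computational crux (an n × n dual problem instead of one in m unknowns) can be illustrated with a bare-bones kernel ridge regression in dual form; this is a generic sketch, not the authors' adaptive AFT variant, and the synthetic matrix stands in for gene expression data:

```python
import numpy as np

def sqdist(A, B):
    # pairwise squared Euclidean distances without materializing an (n, n, m) array
    return (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T

def kernel_ridge_fit_predict(X, y, X_new, lam=1.0, gamma=1e-4):
    K = np.exp(-gamma * sqdist(X, X))                      # n x n, regardless of m
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)   # dual coefficients
    return np.exp(-gamma * sqdist(X_new, X)) @ alpha

rng = np.random.default_rng(1)
n, m = 80, 10000                                           # n samples << m genes
X = rng.standard_normal((n, m))
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(n) # only two informative genes
print(kernel_ridge_fit_predict(X, y, X[:5]))               # in-sample sanity check
```

    All heavy linear algebra involves the n × n kernel matrix only, which is why the dual formulation stays cheap when m is in the tens of thousands.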

  15. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  16. Toward lattice fractional vector calculus

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.
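
    For orientation, the continuum-limit statement amounts to the textbook Fourier symbol of the Riesz fractional Laplacian (a standard relation, not a formula taken from the paper):

```latex
\mathcal{F}\big[(-\Delta)^{\alpha/2} f\big](\mathbf{k}) = |\mathbf{k}|^{\alpha}\,\mathcal{F}[f](\mathbf{k})
```

    A lattice interaction kernel whose Fourier series behaves as |k|^α at small wave vectors therefore reproduces a fractional derivative of order α in the continuum limit.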

  17. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if the digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  18. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  19. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, yielding texture patterns (or repetitive patterns), and extracts texture features from them by generating the dominant neighborhood structure (DNS) map. Principal component analysis (PCA) is then used to reduce the dimensionality of the feature vector containing the extracted texture features, since a high-dimensional feature vector can degrade classification performance; in this way the paper configures an effective feature vector of discriminative fault features for diagnosis. Finally, the proposed approach utilizes one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with the OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
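
    The pipeline described (dimensionality reduction by PCA, then one-against-all RBF-kernel SVMs) can be sketched with standard tooling; the synthetic class clusters below stand in for the DNS texture features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per, dim, n_classes = 60, 500, 4   # stand-ins for high-dimensional texture vectors
X = np.vstack([rng.normal(c, 1.0, (n_per, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# PCA trims the feature vector; OneVsRestClassifier realizes the one-against-all scheme
clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                    OneVsRestClassifier(SVC(kernel='rbf', gamma='scale')))
print(cross_val_score(clf, X, y, cv=5).mean())
```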

  20. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  1. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for the early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of ASD that are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
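
    The coarse-to-fine idea, though not the Bayesian probit/Ising machinery, can be caricatured as two-level marginal screening: score downsampled blocks of voxels first, then score individual voxels only inside the surviving blocks (the block size, kept fraction and final count below are arbitrary):

```python
import numpy as np

def coarse_to_fine_screen(X, y, block=8, keep=0.1, n_select=20):
    """Schematic multiresolution selection: a coarse pass over voxel blocks
    guides a fine pass over individual voxels (not the paper's MCMC)."""
    n, p = X.shape
    p_coarse = p // block
    Xc = X[:, :p_coarse * block].reshape(n, p_coarse, block).mean(axis=2)
    score_c = np.abs(Xc.T @ (y - y.mean())) / n              # coarse-scale scores
    top_blocks = np.argsort(score_c)[-int(keep * p_coarse):]
    fine_idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in top_blocks])
    score_f = np.abs(X[:, fine_idx].T @ (y - y.mean())) / n  # fine-scale scores
    return fine_idx[np.argsort(score_f)[-n_select:]]         # selected voxels

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 100000))                       # 100 subjects, 1e5 voxels
beta = np.zeros(100000); beta[:5] = 3.0
y = (X @ beta + rng.standard_normal(100) > 0).astype(float)  # binary diagnosis
print(coarse_to_fine_screen(X, y))
```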

  2. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  3. Highly Efficient Broadband Yellow Phosphor Based on Zero-Dimensional Tin Mixed-Halide Perovskite.

    Science.gov (United States)

    Zhou, Chenkun; Tian, Yu; Yuan, Zhao; Lin, Haoran; Chen, Banghao; Clark, Ronald; Dilbeck, Tristan; Zhou, Yan; Hurley, Joseph; Neu, Jennifer; Besara, Tiglet; Siegrist, Theo; Djurovich, Peter; Ma, Biwu

    2017-12-27

    Organic-inorganic hybrid metal halide perovskites have emerged as a highly promising class of light emitters, which can be used as phosphors for optically pumped white light-emitting diodes (WLEDs). By controlling the structural dimensionality, metal halide perovskites can exhibit tunable narrow and broadband emissions from the free-exciton and self-trapped excited states, respectively. Here, we report a highly efficient broadband yellow light emitter based on the zero-dimensional tin mixed-halide perovskite (C4N2H14Br)4SnBrxI6-x (x = 3). This rare-earth-free, ionically bonded crystalline material possesses a perfect host-dopant structure, in which the light-emitting metal halide species (SnBrxI6-x⁴⁻, x = 3) are completely isolated from each other and embedded in the wide-band-gap organic matrix composed of C4N2H14Br⁻. The strongly Stokes-shifted broadband yellow emission peaked at 582 nm from this phosphor, which results from excited-state structural reorganization, has an extremely large full width at half-maximum of 126 nm and a high photoluminescence quantum efficiency of ∼85% at room temperature. UV-pumped WLEDs fabricated using this yellow emitter together with a commercial europium-doped barium magnesium aluminate blue phosphor (BaMgAl10O17:Eu²⁺) can exhibit high color rendering indexes of up to 85.

  4. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aiming at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires reconstructing images markedly faster than present CTs do. Although various reconstruction methods have been proposed so far, the only method practically employed at present is the filtered backprojection (FBP) method, which gives high quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the quality of the images obtained was not good, despite being a promising method for high-speed reconstruction because of its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high quality images by pursuing the relationship between image quality and interpolation method. In this algorithm, the radial data sampling points in Fourier space are increased by a factor of 2^β, and linear or spline interpolation is used. Comparison of this method with the present FBP method leads to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method becomes about 1/10 that of the FBP method, and the memory requirement is also reduced by about 20%. (Wakatsuki, Y.)
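
    A compact sketch of the TFT idea via the Fourier slice theorem, with scipy's linear interpolation standing in for the paper's tuned linear/spline schemes (no radial oversampling, so the quality is deliberately basic):

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import rotate

def direct_fourier_reconstruction(sinogram, angles):
    """1D FFT of each parallel projection gives a radial line of the object's
    2D spectrum; interpolate those radial samples onto a Cartesian grid and
    inverse-2D-FFT. The interpolation step dominates image quality."""
    n_ang, n_det = sinogram.shape
    proj_fft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
    freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
    kx = (np.cos(angles)[:, None] * freqs[None, :]).ravel()
    ky = (np.sin(angles)[:, None] * freqs[None, :]).ravel()
    KX, KY = np.meshgrid(freqs, freqs)
    spec = griddata((kx, ky), proj_fft.ravel(), (KX, KY), method='linear', fill_value=0.0)
    return np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spec))))

# usage: reconstruct a disc phantom from 180 synthetic parallel-beam projections
x = np.linspace(-1, 1, 128)
phantom = ((x[:, None] ** 2 + x[None, :] ** 2) < 0.25).astype(float)
angles = np.linspace(0, np.pi, 180, endpoint=False)
sinogram = np.array([rotate(phantom, np.degrees(a), reshape=False, order=1).sum(axis=0)
                     for a in angles])
image = direct_fourier_reconstruction(sinogram, angles)
```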

  5. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue and nausea are reported with first-generation systems no more often than with two-dimensional (2D) LVSs. The higher cost of the system, the obligation to wear glasses, and the big, heavy camera probes of some devices count among the negative aspects that need improvement. The depth loss in tissues with 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side effects reported with first-generation systems, 3D LVSs appear to be strong competitors to classical laparoscopic imaging systems. Thanks to technological advancements, lighter and smaller cameras and glasses-free monitors will be in use in the near future.

  6. Collective excitations and superconductivity in reduced dimensional systems - Possible mechanism for high Tc

    International Nuclear Information System (INIS)

    Santoyo, B.M.

    1989-01-01

    The author studies in full detail a possible mechanism of superconductivity in slender electronic systems of finite cross section. This mechanism is based on the pairing interaction mediated by the multiple modes of acoustic plasmons in these structures. First, he shows that multiple non-Landau-damped acoustic plasmon modes exist for electrons in a quasi-one-dimensional wire at finite temperatures. These plasmons are of two basic types. The first is made up of collective longitudinal oscillations in which the electrons of a given transverse energy level oscillate against the electrons in the neighboring transverse energy level; these modes are called Slender Acoustic Plasmons, or SAPs. The other is the quasi-one-dimensional acoustic plasmon mode, in which all the electrons oscillate together in phase among themselves but out of phase against the positive ion background. He shows numerically and argues physically that even for a temperature comparable to the mode separation Δω the SAPs and the quasi-one-dimensional plasmon persist. Then, based on a clear physical picture, he develops in terms of the dielectric function a theory of superconductivity capable of treating the simultaneous participation of the multiple bosonic modes that mediate the pairing interaction. The effect of mode damping is then incorporated in a simple manner that is free of the encumbrance of the strong-coupling Green's function formalism usually required for the retardation effect. Explicit formulae including such damping are derived for the critical temperature Tc and the energy gap Δ0. With those modes, and armed with such a formalism, he proceeds to investigate a possible superconducting mechanism for high Tc in quasi-one-dimensional single-wire and multi-wire systems.

  7. Signed zeros of Gaussian vector fields - density, correlation functions and curvature

    CERN Document Server

    Foltin, G

    2003-01-01

    We calculate correlation functions of the (signed) density of zeros of Gaussian distributed vector fields. We are able to express correlation functions of arbitrary order through the curvature tensor of a certain abstract Riemann–Cartan or Riemannian manifold. As an application, we discuss one- and two-point functions. The zeros of a two-dimensional Gaussian vector field model the distribution of topological defects in the high-temperature phase of two-dimensional systems with orientational degrees of freedom, such as superfluid films, thin superconductors and liquid crystals.

  8. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the scale of individual buildings have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more challenging, especially when only the spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, the extraction and evaluation of textural information has generally been a time-consuming process, especially for the large areas affected by an earthquake, due to the size of VHR images. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify earthquake damage. Textural information was used during classification in addition to the spectral information. For the textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture the input-output relationships of high-dimensional systems.
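
    The Haralick/GLCM step maps directly onto standard image tooling; a sketch for a single window, using scikit-image's graycomatrix and graycoprops (spelled greycomatrix/greycoprops in older releases), with random noise standing in for a panchromatic patch:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(window, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    # second-order statistics from the gray level co-occurrence matrix,
    # computed for several pixel distances and directions as described above
    glcm = graycomatrix(window, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

window = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print(haralick_features(window).shape)   # one feature vector per sliding window
```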

  9. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors. For teachers

  10. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles, which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region—as obtained by a numerical hydrodynamics and particle transport model—we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation fields of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of the orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separation. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with the orbital orientation.

  11. High-speed three-dimensional plasma temperature determination of axially symmetric free-burning arcs

    International Nuclear Information System (INIS)

    Bachmann, B; Ekkert, K; Bachmann, J-P; Marques, J-L; Schein, J; Kozakov, R; Gött, G; Schöpp, H; Uhrlandt, D

    2013-01-01

    In this paper we introduce an experimental technique that allows for high-speed, three-dimensional determination of electron density and temperature in axially symmetric free-burning arcs. Optical filters with narrow spectral bands of 487.5-488.5 nm and 689-699 nm are utilized to gain two-dimensional spectral information on a free-burning argon tungsten inert gas arc. A setup of mirrors allows one to image identical arc sections in the two spectral bands onto a single camera chip. Two different Abel inversion algorithms have been developed to reconstruct the original radial distribution of the emission coefficients detected within each spectral window and to confirm the results. Under the assumption of local thermodynamic equilibrium, we calculate emission coefficients as a function of temperature by applying the Saha equation, the ideal gas law, the quasineutral gas condition and the NIST compilation of spectral lines. Ratios of calculated emission coefficients are compared with measured ones, yielding local plasma temperatures. In the case of axial symmetry, the three-dimensional plasma temperature distributions have been determined at dc currents of 100, 125, 150 and 200 A, yielding temperatures up to 20000 K in the hot cathode region. These measurements have been validated by four different techniques utilizing a high-resolution spectrometer at different positions in the plasma. The plasma temperatures show good agreement across the different methods. Additionally, spatially resolved transient plasma temperatures have been measured for a dc pulsed process, employing a high-speed frame rate of 33000 frames per second, showing the modulation of the arc isothermals with time and providing information about the sensitivity of the experimental approach. (paper)
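
    The Abel inversion step admits a compact "onion peeling" discretization, one of many schemes (the authors developed two of their own); a sketch assuming constant emission within concentric shells:

```python
import numpy as np

def abel_matrix(n, dr=1.0):
    # W[j, i]: chord length of the line of sight at height y_j = j*dr
    # through the shell between r_i = i*dr and r_{i+1} = (i+1)*dr
    W = np.zeros((n, n))
    for j in range(n):
        for i in range(j, n):
            y, ri, ro = j * dr, i * dr, (i + 1) * dr
            W[j, i] = 2.0 * (np.sqrt(ro**2 - y**2) - np.sqrt(max(ri**2 - y**2, 0.0)))
    return W

n = 40
eps_true = 1.0 - (np.arange(n) / n) ** 2   # synthetic radial emission profile
W = abel_matrix(n)
I = W @ eps_true                           # line-integrated (measured) profile
eps_rec = np.linalg.solve(W, I)            # inversion: W is upper triangular
print(np.allclose(eps_rec, eps_true))      # True
```

    In practice the measured profile is noisy and the back-substitution amplifies noise toward the axis, which is one motivation for cross-checking independent inversion algorithms.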

  12. Graphene materials as 2D non-viral gene transfer vector platforms.

    Science.gov (United States)

    Vincent, M; de Lázaro, I; Kostarelos, K

    2017-03-01

    Advances in genomics and gene therapy could offer solutions to many diseases that remain incurable today; however, one of the critical obstacles halting clinical progress is the difficulty of designing efficient and safe delivery vectors for the appropriate genetic cargo. Safety and large-scale production concerns counterbalance the high gene transfer efficiency achieved with viral vectors, while non-viral strategies have yet to become sufficiently efficient. The extraordinary physicochemical, optical and photothermal properties of graphene-based materials (GBMs) could offer two-dimensional components for the design of nucleic acid carrier systems. We discuss here such properties and their implications for the optimization of gene delivery. While the design of such vectors is still in its infancy, we provide an exhaustive and up-to-date analysis of the studies that have explored GBMs as gene transfer vectors, focusing on the functionalization strategies followed to improve vector performance and on the biological effects attained.

  13. THREE-DIMENSIONAL OBSERVATIONS ON THICK BIOLOGICAL SPECIMENS BY HIGH VOLTAGE ELECTRON MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Tetsuji Nagata

    2011-05-01

    Full Text Available Thick biological specimens, prepared as whole-mount cultured cells or thick sections from embedded tissues, were stained with histochemical reactions, such as thiamine pyrophosphatase, glucose-6-phosphatase, cytochrome oxidase, acid phosphatase and DAB reactions, and radioautography, to observe the 3-D ultrastructure of cell organelles in stereo-pairs produced by high voltage electron microscopy at accelerating voltages of 400-1000 kV. The organelles demonstrated were the Golgi apparatus, endoplasmic reticulum, mitochondria, lysosomes, peroxisomes, pinocytotic vesicles and sites of incorporation of radioactive compounds. As a result, these cell organelles were observed three-dimensionally and the relative relationships between them were demonstrated.

  14. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. The ANVZ covariance method is thus extended to the study of tunneling radiation from higher-dimensional black holes.
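
    For orientation, the quantities computed follow the standard Hamilton-Jacobi tunneling recipe (textbook relations; the paper's contribution is evaluating them covariantly for these rotating metrics):

```latex
\Gamma \propto \exp\!\left(-\frac{2}{\hbar}\,\mathrm{Im}\,S\right), \qquad S = \int p_r \,\mathrm{d}r, \qquad T_H = \frac{\hbar\,\kappa}{2\pi k_B}
```

    Here S is the radial action obtained by separating the Hamilton-Jacobi equation near the horizon, and κ is the horizon surface gravity.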

  15. The high exponent limit $p \to \infty$ for the one-dimensional nonlinear wave equation

    OpenAIRE

    Tao, Terence

    2009-01-01

    We investigate the behaviour of solutions $\phi = \phi^{(p)}$ to the one-dimensional nonlinear wave equation $-\phi_{tt} + \phi_{xx} = -|\phi|^{p-1} \phi$ with initial data $\phi(0,x) = \phi_0(x)$, $\phi_t(0,x) = \phi_1(x)$, in the high exponent limit $p \to \infty$ (holding $\phi_0, \phi_1$ fixed). We show that if the initial data $\phi_0, \phi_1$ are smooth with $\phi_0$ taking values in $(-1,1)$ and obey a mild non-degeneracy condition, then $\phi$ converges locally uniformly to a piecewis...

  16. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n = 100.

  17. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N. N.; Mazur, E. A.

    2016-01-01

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at high pressure. The formed structure with H–P–H elements is shown to be locally stable in its phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  18. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon–hydrogen bonds

    KAUST Repository

    Wang, Liang

    2015-04-22

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold–gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon–hydrogen bonds with molecular oxygen.

  19. Electric Field Guided Assembly of One-Dimensional Nanostructures for High Performance Sensors

    Directory of Open Access Journals (Sweden)

    Wing Kam Liu

    2012-05-01

    Full Text Available Various nanowire or nanotube-based devices have been demonstrated to fulfill the anticipated future demands on sensors. To fabricate such devices, electric field-based methods have demonstrated a great potential to integrate one-dimensional nanostructures into various forms. This review paper discusses theoretical and experimental aspects of the working principles, the assembled structures, and the unique functions associated with electric field-based assembly. The challenges and opportunities of the assembly methods are addressed in conjunction with future directions toward high performance sensors.
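
    Much of the electric-field-guided assembly surveyed here is dielectrophoretic; for a polarizable sphere of radius r in a medium of permittivity ε_m the time-averaged force takes the textbook form (quoted for orientation, not taken from the review):

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi\,\varepsilon_m r^{3}\,\mathrm{Re}[K(\omega)]\,\nabla|\mathbf{E}_{\mathrm{rms}}|^{2}, \qquad K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}}
```

    Particles are drawn toward field maxima when Re[K] > 0 (positive DEP) and repelled when Re[K] < 0, which is what lets one-dimensional nanostructures be steered into position between electrodes.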

  20. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  1. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
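
    A rough geometric caricature of the polytope picture, assuming synthetic data generated as convex mixtures of a few archetypes; a convex hull on a PCA projection stands in for ParTI's principal-convex-hull fitting:

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
archetypes = rng.normal(0.0, 5.0, (3, 50))         # three hypothetical "tasks"
weights = rng.dirichlet(np.ones(3), size=500)      # convex mixture coefficients
X = weights @ archetypes + 0.3 * rng.standard_normal((500, 50))

Z = PCA(n_components=2).fit_transform(X)           # k archetypes span a (k-1)-simplex
hull = ConvexHull(Z)
print(len(hull.vertices), "hull vertices cluster near the archetype corners")
```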

  2. A novel algorithm of artificial immune system for high-dimensional function numerical optimization

    Institute of Scientific and Technical Information of China (English)

    DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen

    2005-01-01

    Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, it is proved that IMCPA is convergent. Compared with some other evolutionary programming algorithms (like the Breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, like high-dimensional function optimization; it maintains the diversity of the population, avoids prematurity to some extent, and has a higher convergence speed.
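
    A minimal clonal-selection optimizer conveying the flavor of this family of algorithms on a high-dimensional test function; the operators and constants below are generic stand-ins, not IMCPA's memory and programming operators:

```python
import numpy as np

def clonal_selection_minimize(f, dim, pop=30, clones=5, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5.0, 5.0, (pop, dim))
    for _ in range(gens):
        P = P[np.argsort([f(x) for x in P])]                 # best antibodies first
        offspring = []
        for rank, x in enumerate(P):
            scale = 0.5 * np.exp(-2.0 * (pop - rank) / pop)  # worse -> larger mutation
            offspring.append(x + scale * rng.standard_normal((clones, dim)))
        C = np.vstack([P] + offspring)
        P = C[np.argsort([f(x) for x in C])[:pop]]           # clonal selection
        P[-3:] = rng.uniform(-5.0, 5.0, (3, dim))            # fresh antibodies for diversity
    return P[0], f(P[0])

sphere = lambda x: float(np.sum(x ** 2))                     # 30-dimensional test function
print(clonal_selection_minimize(sphere, dim=30))
```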

  3. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive the behavior of the wave beams is appreciably affected by the self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  4. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
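
    The merge tree itself can be computed with a union-find sweep over decreasing threshold; a bare-bones sketch for a 1D scalar field (the paper works in arbitrary dimension and additionally matches subtrees across time steps):

```python
import numpy as np

def merge_events(values):
    """Sweep thresholds from high to low; whenever a newly activated cell joins
    two existing superlevel-set components, record a merge (saddle) event."""
    n = len(values)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    active = np.zeros(n, dtype=bool)
    events = []
    for i in np.argsort(values)[::-1]:      # descending threshold sweep
        active[i] = True
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and active[j]}
        if len(roots) == 2:
            events.append((values[i], i))   # two maxima merge at this saddle
        for r in roots:
            parent[r] = i                   # attach neighbor components to the new cell
    return events

x = np.linspace(0.0, 4.0 * np.pi, 200)
print(merge_events(np.sin(x) + 0.1 * x))    # one event per saddle of the field
```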

  5. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n = 100.

  6. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon-hydrogen bonds

    Science.gov (United States)

    Wang, Liang; Zhu, Yihan; Wang, Jian-Qiang; Liu, Fudong; Huang, Jianfeng; Meng, Xiangju; Basset, Jean-Marie; Han, Yu; Xiao, Feng-Shou

    2015-04-01

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold-gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon-hydrogen bonds with molecular oxygen.

  7. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite-sample properties of the regularized high-dimensional Cox regression via lasso. The existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
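
    The structural difficulty being tackled is visible in the negative log partial likelihood itself (the standard Cox form):

```latex
\ell(\beta) = -\sum_{i:\,\delta_i = 1}\left[ x_i^{\top}\beta - \log \sum_{j \in R(t_i)} \exp\!\left(x_j^{\top}\beta\right) \right]
```

    Here δ_i is the event indicator and R(t_i) the risk set at time t_i; the inner sum couples observations, so the summands are neither iid nor Lipschitz, which is exactly what the approximation step addresses.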

  8. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Degtyarenko, N. N.; Mazur, E. A., E-mail: eugen-mazur@mail.ru [National Research Nuclear University MEPhI (Russian Federation)

    2016-08-15

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at high pressure. The formed structure with H–P–H elements is shown to be locally stable in its phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  9. Optical electromagnetic vector-field modeling for the accurate analysis of finite diffractive structures of high complexity

    DEFF Research Database (Denmark)

    Dridi, Kim; Bjarklev, Anders Overgaard

    1999-01-01

    An electromagnetic vector-field model for the design of optical components, based on the finite-difference time-domain method and radiation integrals, is presented. Its ability to predict the optical electromagnetic dynamics in structures with complex material distribution is demonstrated. Theoretical...

  10. Three-dimensional interconnected porous graphitic carbon derived from rice straw for high performance supercapacitors

    Science.gov (United States)

    Jin, Hong; Hu, Jingpeng; Wu, Shichao; Wang, Xiaolan; Zhang, Hui; Xu, Hui; Lian, Kun

    2018-04-01

    Three-dimensional interconnected porous graphitic carbon materials are synthesized via a combined graphitization and activation process with rice straw as the carbon source. The physicochemical properties of the materials are characterized by nitrogen adsorption/desorption, Fourier-transform infrared spectroscopy, X-ray diffraction, Raman spectroscopy, scanning electron microscopy and transmission electron microscopy. The results demonstrate that the as-prepared carbon has a high specific surface area of 3333 m2 g-1 with abundant mesoporous and microporous structures. It exhibits superb performance in symmetric double-layer capacitors: a high specific capacitance of 400 F g-1 at a current density of 0.1 A g-1, good rate performance with 312 F g-1 at a current density of 5 A g-1, and favorable cycle stability with 6.4% loss after 10000 cycles at a current density of 5 A g-1 in an aqueous electrolyte of 6 M KOH. Rice straw is thus a promising carbon source for fabricating inexpensive, sustainable and high-performance supercapacitor electrode materials.

  11. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with an entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  12. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional peak area was calculated for 11 antioxidants by two different methods: from the areas reported by the control software, and by fitting the data with a Gaussian model. Both methods were evaluated for precision and sensitivity, and both demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining the areas reported by the control software gave superior limits of detection, of the order of 1 × 10-6 M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of a countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
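
    As an illustration of the second peak-area method mentioned above, the sketch below fits a Gaussian to one hypothetical second-dimension peak with scipy and reads off the fitted area; the retention times, noise level, and true parameters are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak(t, area, t0, sigma):
    """Gaussian chromatographic peak parameterized directly by its area."""
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2))

t = np.linspace(0, 30, 300)                         # second-dimension retention time, s
rng = np.random.default_rng(0)
y = gaussian_peak(t, 12.0, 14.0, 1.5) + rng.normal(0, 0.05, t.size)   # noisy signal

p0 = [y.sum() * (t[1] - t[0]), t[y.argmax()], 1.0]  # crude initial guesses
(area, t0, sigma), _ = curve_fit(gaussian_peak, t, y, p0=p0)
print(f"fitted area {area:.2f}, apex {t0:.2f} s, sigma {sigma:.2f} s")
```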

  13. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2012-04-01

    This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a genetic fuzzy rule-based classification system (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as the computational requirements of its learning algorithm, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked iteratively, producing one fuzzy rule at a time. The REA proceeds in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second decides the antecedent part of the fuzzy rule using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results on a hyperspectral remote sensing classification task, as well as on 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  14. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. Their remaining major pitfall is relatively poor photovoltaic performance compared with 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)2(MA)3Pb4I13 perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among reported 2D devices, with excellent humidity resistance. The enhanced efficiency, from 12.3% (without Cs) to 13.7% (with 5% Cs), is attributed to well-controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility and improved charge-transfer kinetics. Surprisingly, Cs doping also confers superior stability on the 2D perovskite solar cells when subjected to a high-humidity environment without encapsulation. The device doped with 5% Cs degrades by only ca. 10% after 1400 hours of exposure at 30% relative humidity (RH), and exhibits significantly improved stability under heating and high-moisture environments. Our results provide an important step toward air-stable and fully printable low-dimensional perovskites as a next-generation renewable energy source.

  15. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with an entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  16. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for the joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the two modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and a complex mean-variance relationship in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and the model selection procedures are shown to work well in simulations and applications. Applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  17. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery: a systematic review.

    Science.gov (United States)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian; Kildebro, Niels; Rosenberg, Jacob

    2017-01-01

    This systematic review investigates newer-generation three-dimensional (3D) laparoscopy vs two-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons showed contradictory results due to technological shortcomings. The review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) comparing newer-generation 3D laparoscopy with 2D laparoscopy were included through searches in the PubMed, EMBASE, and Cochrane Central Register of Controlled Trials databases. Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) found a significant reduction in performance time and 10 of 13 trials (77%) a significant reduction in error with the use of 3D laparoscopy. Overall, 3D laparoscopy was found to be superior or equal to 2D laparoscopy, and all trials featuring subjective evaluation found 3D laparoscopy superior. More clinical RCTs are awaited to confirm that these results can be reproduced. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Page segmentation using script identification vectors: A first look

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Cannon, M.; Kelly, P.; White, J.

    1997-07-01

    Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document; the vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts, such as Roman and Japanese; documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
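
    A minimal sketch of the visualization step just described: project the 13-dimensional vectors onto their first three principal components and rescale those to RGB. The random `vectors` array below is a placeholder for the per-component script-identification vectors.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
vectors = rng.random((500, 13))       # placeholder: distances to 13 script templates

pcs = PCA(n_components=3).fit_transform(vectors)   # first three principal components

lo, hi = pcs.min(axis=0), pcs.max(axis=0)
rgb = (pcs - lo) / (hi - lo)          # rescale each component to [0, 1] as (R, G, B)
print(rgb[:3])                        # one colour per connected component
```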

  19. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    Science.gov (United States)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of production inputs. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained with a small unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine, coupled with cross validation and backward elimination, to a dataset composed of reflectance from high-resolution multispectral imagery (VIS-NIR), thermal infrared imagery, and vegetation indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel of width 5.4, with LAI, NDVI, the thermal band and the red band as the selected inputs, can spatially estimate chlorophyll concentration with a root-mean-square error of 5.31 μg cm-2, an efficiency of 0.76, and 9 relevance vectors.
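
    scikit-learn ships no relevance vector machine, so the hedged sketch below approximates one by placing an ARD (sparse Bayesian) prior over thin-plate-spline kernel basis functions; the kernel type and the width of 5.4 echo the abstract, while the four input features and the chlorophyll values are simulated stand-ins.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import ARDRegression

def tps_kernel(A, B, width=5.4):
    """Thin plate spline kernel r^2 log r, distances scaled by `width`."""
    r = cdist(A, B) / width
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r ** 2 * np.log(r), 0.0)

rng = np.random.default_rng(0)
X = rng.random((200, 4))                                      # stand-ins for LAI, NDVI, thermal, red
y = 20 + 10 * X[:, 1] - 5 * X[:, 2] + rng.normal(0, 1, 200)   # fake chlorophyll, ug/cm^2

rvm = ARDRegression().fit(tps_kernel(X, X), y)                # ARD prunes most basis functions
print("relevance vectors:", int((np.abs(rvm.coef_) > 1e-6).sum()))
```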

  20. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.
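
    The gating task can be caricatured in a few lines: from 3-fold coincidence events, keep those with at least two energies inside chosen gate windows. The windows and the uniform toy energies below are arbitrary; a real analysis would histogram the remaining energy to build the gated spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
events = rng.uniform(0, 2000, size=(100_000, 3))   # toy 3-fold events, energies in keV
gates = [(1170, 1180), (1330, 1340)]               # two hypothetical 1D gate windows

def in_any_gate(energies):
    return np.any([(energies >= lo) & (energies <= hi) for lo, hi in gates], axis=0)

hits = sum(in_any_gate(events[:, k]).astype(int) for k in range(3))
selected = events[hits >= 2]                       # events passing a 2-fold gate
print(f"{selected.shape[0]} of {events.shape[0]} events pass")
```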

  1. Three-dimensional bicontinuous nanoporous Au/polyaniline hybrid films for high-performance electrochemical supercapacitors

    Science.gov (United States)

    Lang, Xingyou; Zhang, Ling; Fujita, Takeshi; Ding, Yi; Chen, Mingwei

    2012-01-01

    We report three-dimensional bicontinuous nanoporous Au/polyaniline (PANI) composite films made by one-step electrochemical polymerization of a PANI shell onto dealloyed nanoporous gold (NPG) skeletons, for application in electrochemical supercapacitors. The NPG/PANI-based supercapacitors exhibit ultrahigh volumetric capacitance (∼1500 F cm-3) and energy density (∼0.078 Wh cm-3), seven and four orders of magnitude higher, respectively, than those of electrolytic capacitors, at the same power density of up to ∼190 W cm-3. The outstanding capacitive performance results from a novel nanoarchitecture in which pseudocapacitive PANI shells are incorporated into the pore channels of highly conductive NPG, making these films promising electrode materials for supercapacitor devices combining high energy-storage density with high power delivery.

  2. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  3. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures, where the 2D devices lie in two planes perpendicular to each other and to the semiconductor surface. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy while significantly reducing the total CPU time. The quasi-3D simulation technique can be used in many cases, with advantages such as reduced computing time, no need for high-end computing hardware, and ease of operation. (semiconductor integrated circuits)

  4. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures, where the 2D devices lie in two planes perpendicular to each other and to the semiconductor surface. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy while significantly reducing the total CPU time. The quasi-3D simulation technique can be used in many cases, with advantages such as reduced computing time, no need for high-end computing hardware, and ease of operation. (semiconductor integrated circuits)

  5. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme for high-efficiency one-dimensional (1D) atom localization via measurement of the upper-state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the positions and number of the localization peaks, as well as the conditional position probability, can be easily controlled through the system parameters, and sub-half-wavelength atom localization is also observed. More importantly, the probability of detecting the atom within the subwavelength domain reaches 100% when the corresponding conditions are satisfied. The proposed scheme may open a promising route to high-precision and high-efficiency 1D atom localization. (paper)

  6. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

    The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that achieves both high resolution and high throughput using a two-dimensional area sensor. For the spectral resolution, we obtained an interferogram with a larger optical path difference by shifting the area sensor, without altering any optical components. The nonlinear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied so that the signal could be accumulated across vertical pixels for higher throughput. Our approach improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.
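
    The Fourier-transform step at the heart of any such spectrometer is easy to sketch (warping and phase compensation are not modelled here, and all sampling values are invented): an interferogram sampled over optical path difference transforms into a spectrum, with resolution set by the maximum path difference.

```python
import numpy as np

opd = np.linspace(0, 0.2, 4096)       # optical path difference, cm -> resolution ~ 5 cm^-1
ig = np.cos(2 * np.pi * 800.0 * opd) + 0.5 * np.cos(2 * np.pi * 820.0 * opd)   # two lines

spectrum = np.abs(np.fft.rfft(ig))
wavenumber = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])    # wavenumber axis, cm^-1
print(f"strongest line near {wavenumber[spectrum.argmax()]:.0f} cm^-1")
```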

  7. Music Signal Processing Using Vector Product Neural Networks

    Science.gov (United States)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors and then fed into a three-dimensional vector product neural network in which the inputs, outputs, and weights are all three-dimensional. Finally, the outputs are mapped back to the reals. Two methods for the dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
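
    A hedged sketch of the central idea, a layer whose inputs, weights, and outputs are all 3-D vectors combined through cross products; the layer sizes, the summation over inputs, and the tanh nonlinearity are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def vector_product_layer(X, W, b):
    """X: (n_in, 3) input vectors; W: (n_out, n_in, 3) weights; b: (n_out, 3) bias.
    Each output neuron sums cross products of its 3-D weights with the inputs."""
    return np.tanh(np.cross(W, X[None, :, :]).sum(axis=1) + b)   # (n_out, 3)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))       # e.g. spectrogram frames mapped to 3-D via context windows
W = rng.normal(size=(4, 8, 3))
b = rng.normal(size=(4, 3))
print(vector_product_layer(x, W, b).shape)   # -> (4, 3)
```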

  8. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation: It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing work based on AUC in a high-dimensional context depends mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...

  9. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for the simplification and subdivision of a tetrahedral mesh that achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method generates a compact vector-representation video that allows faithful reconstruction with low reconstruction error.

  10. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information with an OAM mode analyser consisting of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer performing the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement 256-ary (16-ary) coded free-space optical communication, transmitting a 256-gray-scale (16-gray-scale) picture. The results show that zero-bit-error-rate performance was achieved.
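
    The encoding side can be mimicked numerically. The sketch below superposes phase-only OAM terms selected by a bit pattern on a Gaussian envelope, then detects which charges are present by correlating the field on a ring with exp(-ilφ); the beam parameters, grid, detection ring, and threshold are all invented for the illustration.

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # an 8-bit symbol -> 8 OAM modes
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

gauss = np.exp(-r ** 2 / 0.25)
field = gauss.astype(complex)                       # the Gaussian reference mode
for l, bit in enumerate(bits, start=1):
    if bit:
        field += gauss * np.exp(1j * l * phi)       # add the OAM mode of charge l

ring = np.abs(r - 0.5) < 0.02                       # sample the field on a thin ring
for l in range(1, bits.size + 1):
    amp = np.abs(np.sum(field[ring] * np.exp(-1j * l * phi[ring])))
    print(l, "present" if amp > 100 else "absent")  # threshold chosen for this grid
```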

  11. On the sensitivity of dimensional stability of high density polyethylene on heating rate

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Although high density polyethylene (HDPE) is one of the most widely used industrial polymers, its applications have remained limited relative to its potential because of its low dimensional stability, particularly at high temperature. The dilatometry test is a method for examining the thermal dimensional stability (TDS) of the material. Despite the importance of simulating the TDS of HDPE during the dilatometry test, it has received no attention from other investigators. The main goal of this research is therefore the simulation of the TDS of HDPE, together with validation of the simulation results against practical experiments. For this purpose, the standard dilatometry test was performed on HDPE specimens, and the secant coefficient of linear thermal expansion was computed from the test. Then, considering the boundary conditions and material properties, the dilatometry test was simulated at different heating rates and the thermal strain versus temperature was calculated. The results showed that the simulation results and the practical experiments agree closely.

  12. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and a buffer-threshold analysis, it maximizes the energy efficiency of wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device using the preamble signal of the sink node. The primary difference from previous approaches is that existing state-of-the-art schemes use duty cycling and sleep modes to reduce the energy consumption of individual sensor devices, whereas the proposed scheme employs group management of sensor devices to maximize the overall energy efficiency of the whole WSN by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. The proposed scheme is therefore suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.

  13. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations grows exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes prohibitively long, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search that finds an optimal feature set using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate the heuristic search, as sketched below. Simulation experiments are carried out by testing Swarm Search on several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative results show that Swarm Search attains relatively low classification error rates without shrinking the size of the feature subset to its minimum.
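
    A hedged sketch of the idea with binary particle swarm optimization as the plug-in metaheuristic and cross-validated k-NN accuracy as the fitness; the paper's framework admits any classifier/metaheuristic pairing, and every parameter below is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated accuracy of k-NN on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

n_particles, n_iter, p = 12, 20, X.shape[1]
pos = rng.random((n_particles, p)) < 0.5            # binary positions = feature masks
vel = rng.normal(0, 1, (n_particles, p))
pbest = pos.copy()
pbest_fit = np.array([fitness(m) for m in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    vel = (0.7 * vel
           + 1.5 * rng.random((n_particles, p)) * (pbest.astype(float) - pos)
           + 1.5 * rng.random((n_particles, p)) * (gbest.astype(float) - pos))
    pos = rng.random((n_particles, p)) < 1 / (1 + np.exp(-vel))   # sigmoid bit sampling
    fit = np.array([fitness(m) for m in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"{gbest.sum()} features selected, CV accuracy {pbest_fit.max():.3f}")
```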

  14. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form; for such data sets, exhaustive searches over all combinations of features are a prerequisite for finding the optimal feature subsets. We show that our approach outperforms existing filter feature subset selection methods on most of the 24 selected benchmark data sets.
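
    The Markov-blanket criterion and the failure of pairwise approximations can both be demonstrated with an XOR toy example (plug-in entropy estimates; data invented): the pair of features attains I(X;Y) = H(Y), while either feature alone carries essentially zero mutual information.

```python
import numpy as np
from collections import Counter

def entropy(items):
    """Shannon entropy (in nats) of a sequence of hashable outcomes."""
    counts = np.array(list(Counter(items).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def joint_mi(rows, y):
    """I(X;Y) for a feature *set* X, from the joint distribution of the whole
    tuple: I(X;Y) = H(X) + H(Y) - H(X,Y). Not a pairwise approximation."""
    xs = [tuple(r) for r in rows]
    xy = [x + (int(t),) for x, t in zip(xs, y)]
    return entropy(xs) + entropy(list(y)) - entropy(xy)

# XOR class: no single feature is informative, but the pair is a Markov
# blanket of Y, so the joint MI attains H(Y), matching the stated criterion.
rng = np.random.default_rng(0)
x1, x2 = rng.integers(0, 2, 5000), rng.integers(0, 2, 5000)
y = x1 ^ x2
print(joint_mi(np.stack([x1, x2], axis=1), y), entropy(list(y)))  # both ~= log 2
print(joint_mi(x1[:, None], y))                                   # one feature: ~= 0
```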

  15. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses modulated high-dimensional time-bin entanglement. By randomly modulating the relative phase by {0, π}, a sender holding the entanglement source can randomly flip the sign of the correlation between the measurement outcomes obtained by two distant recipients. The two recipients must cooperate to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam-splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by randomly changing the dimension of the time-bin entanglement and inserting two 'vacant' slots between the packets; cheating is then revealed by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes.
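
    The correlation-sign mechanism has a simple classical caricature, shown below: the sender's random {0, π} phase flips the parity of the two recipients' outcomes, so only their combined record reproduces the key. This is plain arithmetic for intuition, not a simulation of the quantum protocol or of its security.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
phase = rng.integers(0, 2, n)   # sender's random phase bit: 0 -> 0, 1 -> pi
a = rng.integers(0, 2, n)       # recipient A's (random-looking) outcome
b = a ^ phase                   # B's outcome: anticorrelated exactly when phase = pi
key = a ^ b                     # A and B must combine outcomes to recover the key
assert np.array_equal(key, phase)
print("shared key recovered from joint outcomes only")
```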

  16. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected by the CV-AUC criterion tends to have a larger predictive AUC and a smaller classification error than those selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
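
    Since MCP is not available in scikit-learn, the sketch below uses L1-penalized logistic regression as a stand-in on simulated data; the CV-AUC selection criterion itself is implemented as described, scanning a penalty path and keeping the value with the largest cross-validated AUC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=400, n_features=200, n_informative=10, random_state=0)
Cs = np.logspace(-2, 1, 10)            # inverse penalty strengths: the tuning path

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
cv_auc = []
for C in Cs:
    aucs = []
    for tr, te in cv.split(X, y):
        m = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], m.decision_function(X[te])))
    cv_auc.append(np.mean(aucs))       # cross-validated AUC for this penalty level

best = Cs[int(np.argmax(cv_auc))]
print(f"CV-AUC selects C = {best:.3f} (AUC = {max(cv_auc):.3f})")
```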

  17. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability.

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-07

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical application. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the biomorphic graphene produced in this way is highly crystalline, with atomic layer-thickness controllability, structural designability and fewer noncarbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional, naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in the fabrication of solution-processable electrodes. The graphene films derived from the as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (∼110,700 S m-1 at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for the low-cost mass production of various powdery two-dimensional materials.

  18. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

    Scientists routinely compare gene expression levels in cases versus controls, in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to sparse principal component analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from schizophrenia patients and controls, we provide a novel list of genes implicated in schizophrenia and reveal intriguing patterns of gene co-expression change in schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices of practical interest, such as weighted adjacency matrices.
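
    A toy version of the test's skeleton (difference of sample covariances, leading eigenvalue, permutation null) is easy to write down; the real sLED adds sparsity and asymptotic theory, and all data below are simulated.

```python
import numpy as np

def stat(A, B):
    """Largest absolute eigenvalue of the differential covariance matrix."""
    D = np.cov(A, rowvar=False) - np.cov(B, rowvar=False)
    return np.abs(np.linalg.eigvalsh(D)).max()

rng = np.random.default_rng(0)
n, p = 80, 30
cases, controls = rng.normal(size=(n, p)), rng.normal(size=(n, p))

obs = stat(cases, controls)
pooled = np.vstack([cases, controls])
null = []
for _ in range(500):                      # permutation null distribution
    idx = rng.permutation(2 * n)
    null.append(stat(pooled[idx[:n]], pooled[idx[n:]]))
p_value = (1 + np.sum(np.array(null) >= obs)) / (1 + len(null))
print(f"permutation p-value: {p_value:.3f}")
```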

  19. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-01

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical application. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the biomorphic graphene produced in this way is highly crystalline, with atomic layer-thickness controllability, structural designability and fewer noncarbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional, naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in the fabrication of solution-processable electrodes. The graphene films derived from the as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (~110,700 S m-1 at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for the low-cost mass production of various powdery two-dimensional materials.

  20. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Summary: Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked of HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving the testing of multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration throughout is the challenge of providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106