WorldWideScience

Sample records for high dimensional problems

  1. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  2. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples are utilized to demonstrate the efficacy of the method.
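
    A minimal one-dimensional sketch of the annihilation idea (illustrative only, not the paper's adaptive sparse-grid algorithm): second-order differences annihilate polynomials of degree at most one, so they remain O(h^2) where the function is smooth but stay on the order of the jump across a discontinuity. The test function, grid size and threshold below are assumptions.

```python
# Minimal 1-D polynomial annihilation sketch: second differences annihilate
# linear polynomials, so they are O(h^2) where f is smooth but O(jump) at a
# discontinuity. Test function and threshold are illustrative assumptions.
import numpy as np

def jump_indicator(f_vals):
    """Absolute second differences of equally spaced samples."""
    return np.abs(f_vals[2:] - 2.0 * f_vals[1:-1] + f_vals[:-2])

h = 1e-3
x = np.arange(0.0, 1.0, h)
f = np.sin(2 * np.pi * x) + (x > 0.6)        # smooth part plus a unit jump at 0.6

ind = jump_indicator(f)
threshold = 10.0 * np.median(ind)            # heuristic threshold (an assumption)
print("flagged points:", x[1:-1][ind > threshold])   # expect values near 0.6
```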

  3. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.
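
    The distance-concentration effect behind point (2) is easy to see numerically; a small illustration (assumed i.i.d. uniform data and Euclidean distance):

```python
# Illustration of the "meaningfulness of similarity" issue: for i.i.d.
# uniform data, the relative gap between the farthest and nearest neighbor
# distances shrinks as the dimension d grows.
import numpy as np

rng = np.random.default_rng(0)
n = 500
for d in (2, 10, 100, 1000):
    X = rng.random((n, d))
    q = rng.random(d)                         # a random query point
    dist = np.linalg.norm(X - q, axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative contrast={contrast:.3f}")
# The printed contrast decreases toward 0 with d, so "nearest" loses meaning.
```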

  4. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called 'curse of dimensionality', coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with an increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...

  5. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  6. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable influence on the results, and a distribution is chosen for which the estimation error is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.
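
    A heavily hedged sketch of the extrapolation step, assuming Bucher's scaling law beta(f) ~ A*f + B/f and a toy linear limit-state function (neither is taken from the paper): failure probabilities are estimated at inflated standard deviations (the "support points"), and beta(1) is extrapolated.

```python
# Hedged sketch of asymptotic sampling on a toy reliability problem.
# Assumption: scaling law beta(f) ~ A*f + B/f, where f < 1 means the standard
# deviations are inflated by 1/f so that failures become observable by plain
# Monte Carlo. The limit-state function g is a stand-in, not from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
dim, n_mc = 50, 200_000
beta_true = 3.5

def g(u):                                    # failure when g <= 0
    return beta_true - u.sum(axis=1) / np.sqrt(dim)

fs = np.array([0.4, 0.5, 0.6, 0.7])          # support-point scale factors
betas = []
for f in fs:
    u = rng.standard_normal((n_mc, dim)) / f # inflate std dev by 1/f
    pf = np.mean(g(u) <= 0.0)
    betas.append(-norm.ppf(pf))              # observed reliability index

# Least-squares fit of beta(f) = A*f + B/f, then extrapolate to f = 1.
M = np.column_stack([fs, 1.0 / fs])
A, B = np.linalg.lstsq(M, np.asarray(betas), rcond=None)[0]
print("estimated beta(1) =", A + B, " (exact:", beta_true, ")")
```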

  7. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  8. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, computing the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
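
    A hedged re-creation of the comparison protocol on synthetic data with scikit-learn; the dataset and all hyper-parameters below are illustrative assumptions, not the authors' setup.

```python
# Four classifier families from the study, scored with 10-fold cross-
# validation on a synthetic high-dimensional dataset (p close to n).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           random_state=0)

models = {
    "MLP (neural network)": MLPClassifier(max_iter=2000, random_state=0),
    "CART (decision tree)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name:22s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```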

  9. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behaviour, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding for generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), has been introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with a very large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
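
    A simplified directional-variogram screening in the spirit of VARS, not the published algorithm; the toy model, the perturbation size h and the sample count are assumptions.

```python
# Directional-variogram sensitivity screening (VARS-inspired sketch): for
# each input, estimate gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2] from
# random base points. Inputs with larger gamma are more influential.
import numpy as np

def model(x):                              # toy function: only inputs 0-2 matter
    return 5 * x[:, 0] + 3 * x[:, 1] ** 2 + np.sin(2 * np.pi * x[:, 2])

rng = np.random.default_rng(0)
dim, n_base, h = 20, 2000, 0.1
X = rng.random((n_base, dim))
y0 = model(X)

gamma = np.empty(dim)
for i in range(dim):
    Xp = X.copy()
    Xp[:, i] = np.clip(Xp[:, i] + h, 0.0, 1.0)   # perturb one coordinate
    gamma[i] = 0.5 * np.mean((model(Xp) - y0) ** 2)

print("most influential inputs:", np.argsort(gamma)[::-1][:5])  # expect 0, 1, 2
```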

  10. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
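
    The two-variable special case of a rank-structured approximation is just a truncated SVD; this small demonstration (illustrative function and grid) shows the rapid rank decay that hierarchical tensor formats exploit in higher dimensions.

```python
# Simplest instance of the rank-structured idea: a smooth function of two
# variables sampled on a grid, compressed by a truncated SVD.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
F = np.exp(-np.subtract.outer(x, x) ** 2)      # samples of f(x, y)

U, s, Vt = np.linalg.svd(F, full_matrices=False)
for r in (1, 3, 5, 10):
    Fr = (U[:, :r] * s[:r]) @ Vt[:r, :]        # rank-r approximation
    err = np.linalg.norm(F - Fr) / np.linalg.norm(F)
    print(f"rank {r:2d}: relative error = {err:.2e}")
# Smooth functions admit rapidly converging low-rank approximations, which
# is what makes high-dimensional approximation tractable in tensor formats.
```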

  11. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for approximation in the hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  12. A High Order Solution of Three Dimensional Time Dependent Nonlinear Convective-Diffusive Problem Using Modified Variational Iteration Method

    Directory of Open Access Journals (Sweden)

    Pratibha Joshi

    2014-12-01

    In this paper, we achieve a high-order solution of a three-dimensional nonlinear diffusive-convective problem using the modified variational iteration method. The efficiency of this approach is shown by solving two examples. All computational work has been performed in MATHEMATICA.

  13. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  14. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), two recent swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, yet both also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Extensive experiments show that HS and TLBO are strongly complementary: HS has strong global exploration power but slow convergence, whereas TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS mainly explores unknown regions, while TLBO rapidly exploits high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. An experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.
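
    For reference, a compact TLBO loop, one of the two ingredients of the hybrid; the HS component and the self-adaptive selection strategy of HSTLBO are omitted, and the sphere test function is an assumption.

```python
# Minimal Teaching-Learning-Based Optimization (TLBO) minimizing a sphere
# function: a teacher phase (move toward the best learner) followed by a
# learner phase (pairwise learning), each with greedy selection.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
n_pop, dim, iters = 30, 20, 200
X = rng.uniform(-5.0, 5.0, (n_pop, dim))
fX = sphere(X)

for _ in range(iters):
    # Teacher phase: move everyone toward the best learner.
    teacher = X[np.argmin(fX)]
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    Xnew = X + rng.random((n_pop, dim)) * (teacher - tf * X.mean(axis=0))
    fnew = sphere(Xnew)
    improved = fnew < fX
    X[improved], fX[improved] = Xnew[improved], fnew[improved]

    # Learner phase: learn from a random partner, toward it if better.
    partner = rng.permutation(n_pop)
    better = fX[partner] < fX
    direction = np.where(better[:, None], X[partner] - X, X - X[partner])
    Xnew = X + rng.random((n_pop, dim)) * direction
    fnew = sphere(Xnew)
    improved = fnew < fX
    X[improved], fX[improved] = Xnew[improved], fnew[improved]

print("best value after TLBO:", fX.min())        # should be close to 0
```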

  15. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. The current inverse problems literature often approaches the task by considering suitable point estimators. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation enables the study of many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014, 'Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators', preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  16. Greedy algorithms for high-dimensional non-symmetric linear problems

    Directory of Open Access Journals (Sweden)

    Cancès E.

    2013-12-01

    In this article, we present a family of numerical approaches to solve high-dimensional linear non-symmetric problems. The principle of these methods is to approximate a function which depends on a large number of variates by a sum of tensor-product functions, each term of which is iteratively computed via a greedy algorithm. There exists a good theoretical framework for these methods in the case of (linear and nonlinear) symmetric elliptic problems. However, the convergence results are no longer valid as soon as the problems under consideration are not symmetric. We present here a review of the main algorithms proposed in the literature to circumvent this difficulty, together with some new approaches. The theoretical convergence results and the practical implementation of these algorithms are discussed, and their behaviors are illustrated through some numerical examples.

  17. Dimensional reduction of a generalized flux problem

    International Nuclear Information System (INIS)

    Moroz, A.

    1992-01-01

    In this paper, a generalized flux problem with Abelian and non-Abelian fluxes is considered. In the Abelian case we show that the generalized flux problem for tight-binding models of noninteracting electrons on either a 2n- or a (2n + 1)-dimensional lattice can always be reduced to an n-dimensional hopping problem. A residual freedom in this reduction enables one to identify equivalence classes of hopping Hamiltonians which have the same spectrum. In the non-Abelian case, the reduction is not possible in general unless the flux tensor factorizes into an Abelian one times an element of the corresponding algebra.

  18. Toward precise solution of one-dimensional velocity inverse problems

    International Nuclear Information System (INIS)

    Gray, S.; Hagin, F.

    1980-01-01

    A family of one-dimensional inverse problems is considered with the goal of reconstructing velocity profiles to reasonably high accuracy. The travel-time variable change is used together with an iteration scheme to produce an effective algorithm for computation. Under modest assumptions the scheme is shown to be convergent.

  19. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Background: We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which are hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results: We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion: We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method on π0 or FDR estimation in a dependency context.
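
    For concreteness, the (non-adaptive) Benjamini-Hochberg step-up procedure as a self-contained function, checked on assumed independent synthetic data; the paper's dependent-data simulations are not reproduced here.

```python
# Benjamini-Hochberg step-up procedure: reject the k smallest p-values,
# where k is the largest index with p_(k) <= q*k/m.
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(0)
m0, m1 = 900, 100                                    # true nulls / true signals
z = np.concatenate([rng.standard_normal(m0),
                    rng.standard_normal(m1) + 3.0])  # shifted alternatives
pvals = 2 * norm.sf(np.abs(z))
rej = benjamini_hochberg(pvals, q=0.05)
fdp = rej[:m0].sum() / max(rej.sum(), 1)             # false discovery proportion
print(f"rejections = {rej.sum()}, realized FDP = {fdp:.3f}")
```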

  20. A high-order integral solver for scalar problems of diffraction by screens and apertures in three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Oscar P., E-mail: obruno@caltech.edu; Lintner, Stéphane K.

    2013-11-01

    We present a novel methodology for the numerical solution of problems of diffraction by infinitely thin screens in three-dimensional space. Our approach relies on new integral formulations as well as associated high-order quadrature rules. The new integral formulations involve weighted versions of the classical integral operators related to the thin-screen Dirichlet and Neumann problems as well as a generalization to the open-surface problem of the classical Calderón formulae. The high-order quadrature rules we introduce for these operators, in turn, resolve the multiple Green function and edge singularities (which occur at arbitrarily close distances from each other, and which include weakly singular as well as hypersingular kernels) and thus give rise to super-algebraically fast convergence as the discretization sizes are increased. When used in conjunction with Krylov-subspace linear algebra solvers such as GMRES, the resulting solvers produce results of high accuracy in small numbers of iterations for low and high frequencies alike. We demonstrate our methodology with a variety of numerical results for screen and aperture problems at high frequencies—including simulation of classical experiments such as the diffraction by a circular disc (featuring in particular the famous Poisson spot), evaluation of interference fringes resulting from diffraction across two nearby circular apertures, as well as solution of problems of scattering by more complex geometries consisting of multiple scatterers and cavities.

  1. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligence...

  2. Probabilistic numerical methods for high-dimensional stochastic control and valuation problems on electricity markets

    International Nuclear Information System (INIS)

    Langrene, Nicolas

    2014-01-01

    This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we manage to make the algorithm parsimonious in memory (and hence suitable for high-dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations, and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)

  3. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  4. Multi-dimensional Bin Packing Problems with Guillotine Constraints

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Pisinger, David

    2010-01-01

    The problem addressed in this paper is the decision problem of determining if a set of multi-dimensional rectangular boxes can be orthogonally packed into a rectangular bin while satisfying the requirement that the packing should be guillotine cuttable. That is, there should exist a series of face-parallel straight cuts that can recursively cut the bin into pieces so that each piece contains a box and no box has been intersected by a cut. The unrestricted problem is known to be NP-hard. In this paper we present a generalization of a constructive algorithm for the multi-dimensional bin packing problem, with and without the guillotine constraint, based on constraint programming.

  5. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
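
    A standard one-dimensional MLS approximation with a Gaussian weight, the building block behind such collocation schemes; the node set, weight support and test function are assumptions, and the inverse-source part of the paper is not reproduced.

```python
# Moving least squares in 1-D: at each evaluation point, fit a local
# polynomial by weighted least squares with a Gaussian weight centered there.
import numpy as np

def mls_fit(x_nodes, f_nodes, x_eval, support=0.15, degree=2):
    """Evaluate the MLS approximant at x_eval from scattered data."""
    out = np.empty_like(x_eval)
    for k, xe in enumerate(x_eval):
        w = np.exp(-((x_nodes - xe) / support) ** 2)   # Gaussian weights
        P = np.vander(x_nodes - xe, degree + 1)        # local, centered basis
        A = P.T @ (w[:, None] * P)
        b = P.T @ (w * f_nodes)
        coef = np.linalg.solve(A, b)
        out[k] = coef[-1]        # basis is centered, so value = constant term
    return out

rng = np.random.default_rng(0)
x_nodes = np.sort(rng.random(60))
f_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.linspace(0.05, 0.95, 10)
approx = mls_fit(x_nodes, f_nodes, x_eval)
print("max error:", np.abs(approx - np.sin(2 * np.pi * x_eval)).max())
```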

  6. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    Science.gov (United States)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
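
    A greatly simplified surrogate loop in the spirit of these methods, with scipy's RBFInterpolator standing in for the paper's RBF surrogates; the objective, constraint, penalty weight and budgets are all illustrative assumptions, and this is not the COBRA algorithm itself.

```python
# Surrogate-assisted constrained optimization sketch: fit RBF surrogates to
# an "expensive" objective and constraint, pick the best candidate of the
# penalized surrogate among random samples, evaluate it, and refit.
import numpy as np
from scipy.interpolate import RBFInterpolator

def objective(X):                  # stand-in black-box objective
    return np.sum((X - 0.3) ** 2, axis=1)

def constraint(X):                 # feasible when g(x) <= 0
    return 0.5 - np.sum(X, axis=1)

rng = np.random.default_rng(0)
dim = 6
X = rng.random((15, dim))                          # initial design
f, g = objective(X), constraint(X)

for _ in range(25):
    f_surr = RBFInterpolator(X, f)
    g_surr = RBFInterpolator(X, g)
    cand = rng.random((2000, dim))                 # candidate points
    penal = f_surr(cand) + 1e3 * np.maximum(g_surr(cand), 0.0)
    x_new = cand[np.argmin(penal)]                 # best penalized candidate
    X = np.vstack([X, x_new])
    f = np.append(f, objective(x_new[None]))
    g = np.append(g, constraint(x_new[None]))

feasible = g <= 0.0
print("best feasible value:", f[feasible].min())
```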

  7. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.
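
    For reference, a minimal serial global-best PSO; the heterogeneous multi-core/GPU decomposition studied in the paper is not reproduced, and the Rastrigin test function and coefficients are conventional choices.

```python
# Global-best PSO on the Rastrigin function with standard coefficients.
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(0)
n, dim, iters = 40, 30, 500
w, c1, c2 = 0.72, 1.49, 1.49                 # inertia, cognitive, social

X = rng.uniform(-5.12, 5.12, (n, dim))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), rastrigin(X)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    fX = rastrigin(X)
    better = fX < pbest_f
    pbest[better], pbest_f[better] = X[better], fX[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best value found:", pbest_f.min())
```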

  8. Solution of the two-dimensional spectral factorization problem

    Science.gov (United States)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.

  9. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    Science.gov (United States)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in the linear quadratic regulator problem is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated to efficiently compute the feedback gains directly rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.

  10. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
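
    The canonical p >> n example from this theme: sparse recovery with the LASSO on synthetic data (sizes and regularization strength below are illustrative assumptions).

```python
# LASSO in the p >> n regime: recover a 5-sparse coefficient vector from
# 100 samples of a 1000-dimensional linear model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 100, 1000, 5                      # samples, features, true signals
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.2).fit(X, y)
support = np.flatnonzero(model.coef_)
print("recovered support:", support)         # ideally the first 5 indices
print("nonzero coefficients:", model.coef_[support].round(2))
```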

  11. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  12. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  13. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
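
    For context, the LSB-matching baseline that HUGO is contrasted with, on a synthetic 8-bit image; HUGO's feature-space distortion minimization is not reproduced here.

```python
# LSB matching: each payload bit is embedded by randomly adding or
# subtracting 1 whenever a pixel's least significant bit disagrees with the
# message bit. Synthetic cover image; HUGO itself is not implemented.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.int64)
message = rng.integers(0, 2, size=cover.size)

flat = cover.flatten()
mismatch = (flat & 1) != message
step = rng.choice(np.array([-1, 1]), size=flat.size)
step[flat == 0], step[flat == 255] = 1, -1     # stay inside the 8-bit range
stego = flat.copy()
stego[mismatch] += step[mismatch]              # +/-1 always flips the LSB

print("payload recovered:", np.mean((stego & 1) == message))   # 1.0
print("fraction of pixels changed:", np.mean(stego != flat))   # about 0.5
```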

  14. A finite-dimensional reduction method for slightly supercritical elliptic problems

    Directory of Open Access Journals (Sweden)

    Riccardo Molle

    2004-01-01

    We describe a finite-dimensional reduction method to find solutions for a class of slightly supercritical elliptic problems. A suitable truncation argument allows us to work in the usual Sobolev space even in the presence of supercritical nonlinearities: we modify the supercritical term in such a way as to have subcritical approximating problems; for these problems, the finite-dimensional reduction can be obtained by applying the methods already developed in the subcritical case; finally, we show that, if the truncation is realized at a sufficiently large level, then the solutions of the approximating problems, given by these methods, also solve the supercritical problems when the parameter is small enough.

  15. An inverse problem for a one-dimensional time-fractional diffusion problem

    KAUST Repository

    Jin, Bangti; Rundell, William

    2012-01-01

    We study an inverse problem of recovering a spatially varying potential term in a one-dimensional time-fractional diffusion equation from the flux measurements taken at a single fixed time corresponding to a given set of input sources. The unique...

  16. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    Science.gov (United States)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of a SO(8) monopole in a nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinates. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, all of second order in the momentum components, satisfying the condition of superintegrability. The number 17 coincides with the prediction of the (2n − 1) law of maximal superintegrability order in the case n = 9. Until now, this law was accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; therefore, our results can be treated as evidence that this definition of superintegrability may also apply to some vector equations such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  17. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  18. Decay rate in a multi-dimensional fission problem

    Energy Technology Data Exchange (ETDEWEB)

    Brink, D M; Canto, L F

    1986-06-01

    The multi-dimensional diffusion approach of Zhang Jing Shang and Weidenmueller (1983 Phys. Rev. C28, 2190) is used to study a simplified model for induced fission. In this model it is shown that the coupling of the fission coordinate to the intrinsic degrees of freedom is equivalent to an extra friction and a mass correction in the corresponding one-dimensional problem.

  19. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2012-04-01

    This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which targets reducing the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification task as well as in 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  20. Two-dimensional boundary-value problem for ion-ion diffusion

    International Nuclear Information System (INIS)

    Tuszewski, M.; Lichtenberg, A.J.

    1977-01-01

    Like-particle diffusion is usually negligible compared with unlike-particle diffusion because it is two orders higher in spatial derivatives. When the ratio of the ion gyroradius to the plasma transverse dimension is of the order of the fourth root of the mass ratio, previous one-dimensional analysis indicated that like-particle diffusion is significant. A two-dimensional boundary-value problem for ion-ion diffusion is investigated. Numerical solutions are found with models for which the nonlinear partial differential equation reduces to an ordinary fourth-order differential equation. These solutions indicate that the ion-ion losses are higher by a factor of six for a slab geometry, and by a factor of four for circular geometry, than estimated from dimensional analysis. The solutions are applied to a multiple mirror experiment stabilized with a quadrupole magnetic field which generates highly elliptical flux surfaces. It is found that the ion-ion losses dominate the electron-ion losses and that these classical radial losses contribute to a significant decrease of plasma lifetime, in qualitative agreement with the experimental results.

  1. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the...
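
    For contrast, the classic gradient-based active subspace construction that the abstract refers to (the paper's gradient-free GP variant is not reproduced); the test function with a hidden one-dimensional active direction is an assumption.

```python
# Classic active subspace discovery: eigendecompose the Monte Carlo estimate
# of C = E[grad f grad f^T]; dominant eigenvectors span the active subspace.
import numpy as np

rng = np.random.default_rng(0)
dim = 20
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)                         # hidden 1-D active direction

def f(x):                                      # f varies only along w
    return np.sin(x @ w)

def grad_f(x):
    return np.cos(x @ w)[:, None] * w          # chain rule

X = rng.standard_normal((5000, dim))
G = grad_f(X)
C = G.T @ G / X.shape[0]                       # Monte Carlo estimate of C
eigvals, eigvecs = np.linalg.eigh(C)

print("top eigenvalue share:", eigvals[-1] / eigvals.sum())       # close to 1
print("alignment with true direction:", abs(eigvecs[:, -1] @ w))  # close to 1
```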

  2. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the...

  3. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the...

  4. One-dimensional inverse problems of mathematical physics

    CERN Document Server

    Lavrent'ev, M M; Yakhno, V G; Schulenberger, J R

    1986-01-01

    This monograph deals with the inverse problems of determining a variable coefficient and right side for hyperbolic and parabolic equations on the basis of known solutions at fixed points of space for all times. The problems are one-dimensional in nature since the desired coefficient of the equation is a function of only one coordinate, while the desired right side is a function only of time. The authors use methods based on the spectral theory of ordinary differential operators of second order and also methods which make it possible to reduce the investigation of the inverse problems to the in...

  5. Approximate solutions for the two-dimensional integral transport equation. Solution of complex two-dimensional transport problems

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part deals with the solution of complex two-dimensional transport problems; the second one (note CEA-N-2166) treats the critically mixed methods of resolution. A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the interface current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding, and water, or homogenized structural material. The cells are divided into zones that are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is effected by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: CALLIOPE uses a cylindrical cell model and one or three terms for the flux expansion, and NAUSICAA uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes, one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems and by calculations performed in the APOLLO multigroup code.

  6. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation on a unit hypercube.
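
    A compact two-grid correction scheme for the one-dimensional Poisson equation, a minimal cousin of the d-dimensional multigrid analyzed in this work; the smoother, transfer operators and all parameters below are conventional textbook choices, not the paper's setup.

```python
# Two-grid cycle for -u'' = b on (0, 1): damped-Jacobi smoothing,
# full-weighting restriction, linear-interpolation prolongation, exact
# coarse solve.
import numpy as np

def poisson_matrix(n):
    h2 = (1.0 / (n + 1)) ** 2
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h2

def jacobi(A, u, b, sweeps, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (b - A @ u) / d
    return u

def two_grid(A, u, b):
    u = jacobi(A, u, b, sweeps=3)                          # pre-smoothing
    r = b - A @ u
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full weighting
    ec = np.linalg.solve(poisson_matrix(rc.size), rc)      # exact coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                           # coarse points
    e[0::2] = 0.5 * (np.append(0.0, ec) + np.append(ec, 0.0))  # fine points
    return jacobi(A, u + e, b, sweeps=3)                   # post-smoothing

n = 127                        # fine grid; the coarse grid then has 63 points
A, b, u = poisson_matrix(n), np.ones(n), np.zeros(n)
for it in range(8):
    u = two_grid(A, u, b)
    print(f"cycle {it + 1}: residual norm = {np.linalg.norm(b - A @ u):.2e}")
```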

  7. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

    A review is given of more recent papers on this subject. These papers have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  8. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramountly important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
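
    The screening idea behind FAIR in a few lines: rank features by the two-sample t-statistic and apply a centroid-based independence rule to the top-m features. The data are synthetic, and m is fixed by hand here, whereas the paper derives it from an error bound.

```python
# Two-sample t-statistic feature screening followed by a nearest-centroid
# independence rule on the selected features.
import numpy as np

rng = np.random.default_rng(0)
p, n_per_class, p_signal = 2000, 50, 20
shift = np.zeros(p)
shift[:p_signal] = 1.0                               # 20 informative features
X0 = rng.standard_normal((n_per_class, p))
X1 = rng.standard_normal((n_per_class, p)) + shift

# Two-sample t-statistics, one per feature.
m0, m1 = X0.mean(0), X1.mean(0)
v0, v1 = X0.var(0, ddof=1), X1.var(0, ddof=1)
t = (m1 - m0) / np.sqrt(v0 / n_per_class + v1 / n_per_class)

m = 30                                               # number of features kept
keep = np.argsort(np.abs(t))[::-1][:m]
print("informative features among the kept:", np.sum(keep < p_signal))

# Independence rule on the kept features: assign to the nearer centroid.
Xtest = rng.standard_normal((200, p)) + shift        # test data from class 1
d0 = np.sum((Xtest[:, keep] - m0[keep]) ** 2, axis=1)
d1 = np.sum((Xtest[:, keep] - m1[keep]) ** 2, axis=1)
print("test accuracy on class-1 data:", np.mean(d1 < d0))
```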

  9. The dimension split element-free Galerkin method for three-dimensional potential problems

    Science.gov (United States)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-02-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  10. Complexity of hierarchically and 1-dimensional periodically specified problems

    Energy Technology Data Exchange (ETDEWEB)

    Marathe, M.V.; Hunt, H.B. III; Stearns, R.E.; Radhakrishnan, V.

    1995-08-23

    We study the complexity of various combinatorial and satisfiability problems when instances are specified using one of the following specifications: (1) the 1-dimensional finite periodic narrow specifications of Wanke and Ford et al.; (2) the 1-dimensional finite periodic narrow specifications with explicit boundary conditions of Gale; (3) the 2-way infinite 1-dimensional narrow periodic specifications of Orlin et al.; and (4) the hierarchical specifications of Lengauer et al. We obtain three general types of results. First, we prove that there is a polynomial time algorithm that, given a 1-FPN- or 1-FPN(BC)-specification of a graph (or a CNF formula), constructs a level-restricted L-specification of an isomorphic graph (or formula). This theorem, along with the hardness results proved here, provides alternative and unified proofs of many hardness results proved in the past either by Lengauer and Wagner or by Orlin. Second, we study the complexity of generalized CNF satisfiability problems of Schaefer. Assuming P ≠ PSPACE, we characterize completely the polynomial time solvability of these problems when instances are specified as in (1), (2), (3) or (4). As applications of our first two types of results, we obtain a number of new PSPACE-hardness and polynomial time algorithms for problems specified as in (1), (2), (3) or (4). Many of our results also hold for O(log N) bandwidth bounded planar instances.

  11. The 'thousand words' problem: Summarizing multi-dimensional data

    International Nuclear Information System (INIS)

    Scott, David M.

    2011-01-01

    Research highlights: → Sophisticated process sensors produce large multi-dimensional data sets. → Plant control systems cannot handle images or large amounts of data. → Various techniques reduce the dimensionality, extracting information from raw data. → Simple 1D and 2D methods can often be extended to 3D and 4D applications. - Abstract: An inherent difficulty in the application of multi-dimensional sensing to process monitoring and control is the extraction and interpretation of useful information. Ultimately the measured data must be collapsed into a relatively small number of values that capture the salient characteristics of the process. Although multiple dimensions are frequently necessary to isolate a particular physical attribute (such as the distribution of a particular chemical species in a reactor), plant control systems are not equipped to use such data directly. The production of a multi-dimensional data set (often displayed as an image) is not the final step of the measurement process, because information must still be extracted from the raw data. In the metaphor of one picture being equal to a thousand words, the problem becomes one of paraphrasing a lengthy description of the image with one or two well-chosen words. Various approaches to solving this problem are discussed using examples from the fields of particle characterization, image processing, and process tomography.

  12. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  13. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  14. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated

  15. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  16. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  17. An analytical approach for a nodal scheme of two-dimensional neutron transport problems

    International Nuclear Information System (INIS)

    Barichello, L.B.; Cabrera, L.C.; Prolo Filho, J.F.

    2011-01-01

    Research highlights: → Nodal equations for a two-dimensional neutron transport problem. → Analytical Discrete Ordinates Method. → Numerical results compared with the literature. - Abstract: In this work, a solution for a two-dimensional neutron transport problem, in Cartesian geometry, is proposed on the basis of nodal schemes. In this context, one-dimensional equations are generated by an integration process of the multidimensional problem. Here, the integration is performed over the whole domain such that no iterative procedure between nodes is needed. The ADO method is used to develop an analytical discrete ordinates solution for the one-dimensional integrated equations, such that the final solutions are analytical in terms of the spatial variables. The ADO approach, along with a level symmetric quadrature scheme, leads to a significant order reduction of the associated eigenvalue problems. Relations between the averaged fluxes and the unknown fluxes at the boundary are introduced as the auxiliary equations usually needed in nodal schemes. Numerical results are presented and compared with test problems.

  18. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    International Nuclear Information System (INIS)

    BAER, THOMAS A.; SACKINGER, PHILIP A.; SUBIA, SAMUEL R.

    1999-01-01

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations, and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Also discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work among the processors of an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations are demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three-dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed-ups for a fixed problem size, a class of problems of immediate practical importance.

  19. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  20. Dimensional analysis and qualitative methods in problem solving: II

    International Nuclear Information System (INIS)

    Pescetti, D

    2009-01-01

    We show that the underlying mathematical structure of dimensional analysis (DA), in the context of qualitative problem-solving methods, is the algebra of affine spaces. In particular, we show that the qualitative problem-solving procedure based on the parallel decomposition of a problem into simple special cases yields the new mathematical concepts of special points and special representations of affine spaces. A qualitative problem-solving algorithm piloted by the mathematics of DA is illustrated by a set of examples.

  1. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  2. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement

  3. A numerical method for two-dimensional anisotropic transport problem in cylindrical geometry

    International Nuclear Information System (INIS)

    Du Mingsheng; Feng Tiekai; Fu Lianxiang; Cao Changshu; Liu Yulan

    1988-01-01

    The authors deal with the triangular-mesh discontinuous finite element method for solving the time-dependent anisotropic neutron transport problem in two-dimensional cylindrical geometry. An a priori estimate of the numerical solution is given. Stability is proved. The authors have computed a two-dimensional anisotropic neutron transport problem and a tungsten-carbide critical assembly problem using this numerical method. In comparison with the DSN method and with experimental results obtained by others both at home and abroad, the method is satisfactory.

  4. Inverse radiative transfer problems in two-dimensional heterogeneous media

    International Nuclear Information System (INIS)

    Tito, Mariella Janette Berrocal

    2001-01-01

    The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of estimating the internal source and the absorption and scattering coefficients. (author)

  5. Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    International Nuclear Information System (INIS)

    Lucka, Felix

    2012-01-01

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, the use of similar sparsity constraints in the Bayesian framework for inverse problems, encoded in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle to such examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis-Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. (paper)
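
    A minimal sketch of a single-component Gibbs update for a posterior combining a Gaussian likelihood with an L1 (Laplace-type) prior; each full conditional is a two-piece truncated Gaussian, which can be sampled exactly. Dimensions, hyperparameters, and the toy problem are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_l1(A, y, lam, sigma2, n_iter=200, seed=None):
    """Single-component Gibbs sampler for the posterior
    p(x) ~ exp(-||Ax - y||^2 / (2 sigma2) - lam * ||x||_1).
    The full conditional of each x_i is a two-piece truncated Gaussian."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    r = A @ x - y                            # running residual A x - y
    col_sq = (A ** 2).sum(axis=0)
    out = []
    for _ in range(n_iter):
        for i in range(n):
            r -= A[:, i] * x[i]              # residual without component i
            a, b = col_sq[i], -A[:, i] @ r   # log-density: -(a t^2 - 2 b t)/(2 sigma2) - lam|t|
            s = np.sqrt(sigma2 / a)
            mp = (b - lam * sigma2) / a      # mean of the branch t > 0
            mm = (b + lam * sigma2) / a      # mean of the branch t < 0
            lwp = mp**2 / (2 * s**2) + norm.logcdf(mp / s)
            lwm = mm**2 / (2 * s**2) + norm.logcdf(-mm / s)
            if np.log(rng.random()) < lwp - np.logaddexp(lwp, lwm):
                x[i] = truncnorm.rvs(-mp / s, np.inf, loc=mp, scale=s, random_state=rng)
            else:
                x[i] = truncnorm.rvs(-np.inf, -mm / s, loc=mm, scale=s, random_state=rng)
            r += A[:, i] * x[i]
        out.append(x.copy())
    return np.array(out)

# Toy sparse-recovery demo (sizes and hyperparameters illustrative).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80); x_true[:4] = 2.0
y = A @ x_true + 0.05 * rng.normal(size=40)
S = gibbs_l1(A, y, lam=20.0, sigma2=0.05**2, seed=2)
print(np.round(S[100:].mean(axis=0)[:6], 2))  # posterior means of the first entries
```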

  6. Green function of a three-dimensional Wick problem

    International Nuclear Information System (INIS)

    Matveev, V.A.

    1988-01-01

    An exact solution of the three-dimensional Coulomb Wick-Cutkosky problem has been obtained, which possesses a hidden O(4) symmetry. Here we shall give the derivation of the corresponding Green function and consider its connection with the asymptotic behaviour of the scattering amplitude. 9 refs.

  7. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  8. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  9. Three-dimensional problems in the theory of cracks

    International Nuclear Information System (INIS)

    Panasyuk, V.V.; Andrejkiv, A.E.; Stadnik, M.M.

    1979-01-01

    A review is given of the main mechanical concepts and mathematical methods used in solving spatial problems of the theory of cracks. Cases are considered in which a body is subjected to static and cyclic force loading and to geometrically variable temperature fields. The main calculation models of the theory of cracks are characterized in detail. Other models, derived from these and used in solving the above problems, are also mentioned. The most general mathematical methods for solving three-dimensional problems of the theory of cracks are analysed and summarized. Besides exact methods, approximate ones are also presented, these being efficient enough for engineering practice.

  10. TWO-DIMENSIONAL APPROXIMATION OF EIGENVALUE PROBLEMS IN SHELL THEORY: FLEXURAL SHELLS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The eigenvalue problem for a thin linearly elastic shell, of thickness 2ε, clamped along its lateral surface is considered. Under the geometric assumption on the middle surface of the shell that the space of inextensional displacements is non-trivial, the authors obtain, as ε→0, the eigenvalue problem for the two-dimensional "flexural shell" model if the dimension of the space is infinite. If the space is finite dimensional, the limits of the eigenvalues could belong to the spectra of both flexural and membrane shells. The method consists of rescaling the variables and studying the problem over a fixed domain. The principal difficulty lies in obtaining suitable a priori estimates for the scaled eigenvalues.

  11. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  12. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  13. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
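
    A minimal sketch of the slow feature analysis step on which the preprocessing stage is built, here a single linear SFA layer applied to a toy signal; the paper's hierarchical SFA network and the reward-trained readout are not reproduced, and all sizes are illustrative.

```python
import numpy as np

def linear_sfa(X, n_out):
    """Linear slow feature analysis: directions of the whitened input
    whose temporal derivative has the smallest variance."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc.T))
    Z = Xc @ (E / np.sqrt(d))                  # whiten the input
    dZ = np.diff(Z, axis=0)                    # finite-difference derivative
    d2, E2 = np.linalg.eigh(np.cov(dZ.T))      # eigh sorts ascending:
    return Z @ E2[:, :n_out]                   # smallest eigenvalues <-> slowest features

# Toy demo: a slow sine hidden in a fast, linearly mixed signal.
t = np.linspace(0.0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
Y = linear_sfa(X, n_out=1)
print("correlation with slow source:", abs(np.corrcoef(Y[:, 0], slow)[0, 1]))
```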

  14. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  15. One-dimensional computational modeling on nuclear reactor problems

    International Nuclear Information System (INIS)

    Alves Filho, Hermes; Baptista, Josue Costa; Trindade, Luiz Fernando Santos; Heringer, Juan Diego dos Santos

    2013-01-01

    In this article, we present a computational model which gives a dynamic view of some applications of Nuclear Engineering, specifically the power distribution and the effective multiplication factor (keff) calculations. We work with one-dimensional problems of deterministic neutron transport theory, with the linearized Boltzmann equation in the discrete ordinates (SN) formulation, independent of time, with isotropic scattering, and we have built software (a simulator) for modeling the computational problems used in typical calculations. The simulator was implemented in Matlab, version 7.0. (author)

  16. Quantum trajectories in complex space: One-dimensional stationary scattering problems

    International Nuclear Information System (INIS)

    Chou, C.-C.; Wyatt, Robert E.

    2008-01-01

    One-dimensional time-independent scattering problems are investigated in the framework of the quantum Hamilton-Jacobi formalism. The equation for the local approximate quantum trajectories near the stagnation point of the quantum momentum function is derived, and the first derivative of the quantum momentum function is related to the local structure of quantum trajectories. Exact complex quantum trajectories are determined for two examples by numerically integrating the equations of motion. For the soft potential step, some particles penetrate into the nonclassical region, and then turn back to the reflection region. For the barrier scattering problem, quantum trajectories may spiral into the attractors or from the repellers in the barrier region. Although the classical potentials extended to complex space show different pole structures for each problem, the quantum potentials present the same second-order pole structure in the reflection region. This paper not only analyzes complex quantum trajectories and the total potentials for these examples but also demonstrates general properties and similar structures of the complex quantum trajectories and the quantum potentials for one-dimensional time-independent scattering problems

  17. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, the article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
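
    A minimal sketch of the two-stage idea, assuming scikit-learn for the regression step: diffusion-map coordinates are computed from a Gaussian kernel, and a Gaussian process is then regressed on those coordinates. The kernel bandwidth, normalization, and toy manifold are illustrative choices, not those of the article (which builds the GP correlation structure from the diffusion distance itself).

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def diffusion_map(X, eps, n_coords, t=1):
    """Diffusion-map coordinates from a Gaussian kernel (alpha = 1
    density normalization, right eigenvectors of the Markov matrix)."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                     # remove sampling-density effects
    P = K / K.sum(axis=1, keepdims=True)       # Markov transition matrix
    w, V = np.linalg.eig(P)
    idx = np.argsort(-w.real)
    w, V = w.real[idx], V.real[:, idx]
    return V[:, 1:n_coords + 1] * w[1:n_coords + 1]**t   # drop the trivial eigenvector

# Toy example: a property varying along a curved 1-D manifold embedded in
# 10-D "measurement" space (all names and sizes illustrative).
rng = np.random.default_rng(0)
s = np.sort(rng.uniform(0.0, 3.0, 200))
X = np.column_stack([np.cos(s), np.sin(s)]
                    + [0.05 * rng.normal(size=200) for _ in range(8)])
target = np.sin(2 * s)                         # quantity of interest along the manifold
coords = diffusion_map(X, eps=0.5, n_coords=2)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
train = rng.random(200) < 0.5
gpr.fit(coords[train], target[train])
rmse = np.sqrt(np.mean((gpr.predict(coords[~train]) - target[~train])**2))
print("held-out RMSE:", rmse)
```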

  18. The scalar curvature problem on the four dimensional half sphere

    CERN Document Server

    Ben-Ayed, M; El-Mehdi, K

    2003-01-01

    In this paper, we consider the problem of prescribing the scalar curvature under minimal boundary conditions on the standard four dimensional half sphere. We provide an Euler-Hopf type criterion for a given function to be a scalar curvature for some metric conformal to the standard one. Our proof involves the study of critical points at infinity of the associated variational problem.

  19. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
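
    The LSB-tree builds on locality-sensitive hashing; as background, here is a minimal sketch of plain LSH for Euclidean NN search with p-stable (Gaussian) projections and multiple hash tables. All parameters are illustrative, and none of the B-tree machinery of the paper is reproduced.

```python
import numpy as np
from collections import defaultdict

class L2LSH:
    """Basic LSH for Euclidean NN search: h(x) = floor((a.x + b) / w)
    with Gaussian a (a p-stable scheme). Several hash functions are
    concatenated per table; candidates from all tables are re-ranked
    by exact distance."""
    def __init__(self, dim, n_tables=8, n_hashes=6, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_tables, n_hashes, dim))
        self.b = rng.uniform(0.0, w, size=(n_tables, n_hashes))
        self.w = w
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        return np.floor((self.A @ x + self.b) / self.w).astype(int)

    def index(self, X):
        self.X = X
        for i, x in enumerate(X):
            for t, key in enumerate(self._keys(x)):
                self.tables[t][tuple(key)].append(i)

    def query(self, q, k=1):
        cand = set()
        for t, key in enumerate(self._keys(q)):
            cand.update(self.tables[t].get(tuple(key), []))
        if not cand:
            return []
        cand = np.fromiter(cand, dtype=int)
        d = np.linalg.norm(self.X[cand] - q, axis=1)
        return cand[np.argsort(d)[:k]]

# Demo on random 50-dimensional points.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 50))
lsh = L2LSH(dim=50)
lsh.index(X)
q = X[42] + 0.01 * rng.normal(size=50)
print(lsh.query(q, k=3))   # should contain index 42
```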

  20. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  1. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  2. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow for the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
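
    A minimal sketch of the factor-plus-thresholding idea, under simplifying assumptions: the common component is taken as the top-K principal components of the sample covariance, and the remaining entries are soft-thresholded with a correlation-scaled threshold. This stands in for, but is not, the paper's adaptive thresholding estimator.

```python
import numpy as np

def poet_like(X, K, c=0.5):
    """Keep the top-K principal components of the sample covariance as the
    common (factor) part; soft-threshold the remaining idiosyncratic part.
    The entry-wise threshold c*sqrt(s_ii s_jj log(p)/n) is a simplified
    stand-in for the adaptive threshold of the paper."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    w, V = w[::-1], V[:, ::-1]                  # descending eigenvalues
    low_rank = (V[:, :K] * w[:K]) @ V[:, :K].T
    R = S - low_rank                            # idiosyncratic covariance
    tau = c * np.sqrt(np.outer(np.diag(R), np.diag(R)) * np.log(p) / n)
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))         # never threshold the variances
    return low_rank + R_thr

# Toy approximate factor model: 2 common factors plus unit idiosyncratic noise.
rng = np.random.default_rng(0)
n, p = 200, 100
B = rng.normal(size=(p, 2))
X = rng.normal(size=(n, 2)) @ B.T + rng.normal(size=(n, p))
Sigma_hat = poet_like(X, K=2)
Sigma_true = B @ B.T + np.eye(p)
print("spectral-norm error:", np.linalg.norm(Sigma_hat - Sigma_true, 2))
```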

  3. A comparison of two efficient nonlinear heat conduction methodologies using a two-dimensional time-dependent benchmark problem

    International Nuclear Information System (INIS)

    Wilson, G.L.; Rydin, R.A.; Orivuori, S.

    1988-01-01

    Two highly efficient nonlinear time-dependent heat conduction methodologies, the nonlinear time-dependent nodal integral technique (NTDNT) and IVOHEAT, are compared using one- and two-dimensional time-dependent benchmark problems. The NTDNT is completely based on newly developed time-dependent nodal integral methods, whereas IVOHEAT is based on finite elements in space and Crank-Nicolson finite differences in time. IVOHEAT contains the geometric flexibility of the finite element approach, whereas the nodal integral method is constrained at present to Cartesian geometry. For test problems where both methods are equally applicable, the nodal integral method is approximately six times more efficient per dimension than IVOHEAT when a comparable overall accuracy is chosen. This translates to a factor of 200 for a three-dimensional problem having relatively homogeneous regions, and to a smaller advantage as the degree of heterogeneity increases.

  4. Problems associated with dimensional analysis of electroencephalogram data

    Energy Technology Data Exchange (ETDEWEB)

    Layne, S.; Mayer-Kress, G.; Holzfuss, J.

    1985-01-01

    The goal was to evaluate anesthetic depth for a series of 5 to 10 patients by dimensional analysis. It has been very difficult to obtain clean EEG records from the operating room. Noise is prominent due to electrocautery and to movement of the patient's head by operating room personnel. In addition, specialized EEG equipment must be used to reduce noise and to accommodate limited space in the room. This report discusses problems associated with dimensional analysis of the EEG. We choose one EEG record from a single patient, in order to study the method but not to draw general conclusions. For simplicity, we consider only two states: awake but quiet, and medium anesthesia. 14 refs., 8 figs., 1 tab.

  5. Orbits of the n-dimensional Kepler-Coulomb problem and universality of the Kepler laws

    International Nuclear Information System (INIS)

    Oender, M; Vercin, A

    2006-01-01

    In the standard classical mechanics textbooks used at undergraduate and graduate levels, no attention is paid to the dimensional aspects of the Kepler-Coulomb problem. We have shown that the orbits of the n-dimensional classical Kepler-Coulomb problem are the usual conic sections in a fixed two-dimensional subspace and the Kepler laws with their well-known forms are valid independent of dimension. The basic characteristics of motion in a central force field are also established in an arbitrary dimension. The approach followed is easily accessible to late undergraduate and recent graduate students

  6. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    Science.gov (United States)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting appropriate rectangles from both sides of the wall of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  7. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets....... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including

  8. The two-dimensional cutting stock problem within the roller blind production process

    NARCIS (Netherlands)

    E.R. de Gelder; A.P.M. Wagelmans (Albert)

    2007-01-01

    In this paper we consider a two-dimensional cutting stock problem encountered at a large manufacturer of window covering products. The problem occurs in the production process of made-to-measure roller blinds. We develop a solution method that takes into account the characteristics of

  9. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  10. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
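
    A minimal sketch, assuming scikit-learn, of the down-sizing strategy mentioned above: a nearest-centroid classifier is trained on imbalanced high-dimensional data with and without random undersampling of the majority class, and class-specific accuracies are compared. Sample sizes and effect sizes are illustrative, and this is not the simulation design of the paper.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
p = 1000

def simulate(n_maj, n_min):
    """Two classes in p dimensions differing in the first 30 features."""
    X0 = rng.normal(size=(n_maj, p))
    X1 = rng.normal(size=(n_min, p)); X1[:, :30] += 1.0
    return np.vstack([X0, X1]), np.r_[np.zeros(n_maj), np.ones(n_min)]

def class_accuracies(clf, X, y):
    pred = clf.predict(X)
    return [round((pred[y == c] == c).mean(), 2) for c in (0, 1)]

Xtr, ytr = simulate(180, 20)      # imbalanced training set
Xte, yte = simulate(500, 500)     # balanced test set

plain = NearestCentroid().fit(Xtr, ytr)

# Down-sizing: randomly discard majority samples to match the minority size.
maj = np.flatnonzero(ytr == 0)
mnr = np.flatnonzero(ytr == 1)
keep = np.r_[rng.choice(maj, size=mnr.size, replace=False), mnr]
down = NearestCentroid().fit(Xtr[keep], ytr[keep])

print("plain      accuracy (class 0, class 1):", class_accuracies(plain, Xte, yte))
print("down-sized accuracy (class 0, class 1):", class_accuracies(down, Xte, yte))
```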

  11. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties owned by the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with high risk of disruption and regions with low risk of disruption. (paper)

  12. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high dimensional data cube into low multi-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  13. About the problem of generating three-dimensional pseudo-random points.

    Science.gov (United States)

    Carpintero, D. D.

    The author demonstrates that a popular pseudo-random number generator is not adequate in some circumstances to generate n-dimensional random points, n > 2. This problem is particularly noxious when direction cosines are generated. He proposes several solutions, among them a good generator that satisfies all statistical criteria.

  14. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  15. Solving one-dimensional phase change problems with moving grid method and mesh free radial basis functions

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.; Kansa, E.J.

    2006-01-01

    Many heat-transfer problems involve a change of phase of material due to solidification or melting. Applications include the safety studies of nuclear reactors (molten core-concrete interaction), the drilling of high ice-content soil, the storage of thermal energy, etc. These problems are often called Stefan or moving boundary value problems. Mathematically, the interface motion is expressed implicitly in an equation for the conservation of thermal energy at the interface (the Stefan condition). This introduces a non-linear character to the system, which treats each problem somewhat uniquely. Exact solutions of phase change problems are limited exclusively to cases in which, e.g., the heat-transfer region is an infinite or semi-infinite one-dimensional space. Therefore, solutions are obtained either by approximate analytical methods or by numerical methods. Finite-difference methods and finite-element techniques have been used extensively for the numerical solution of moving boundary problems. Recently, numerical methods have focused on the idea of using a mesh-free methodology for the numerical solution of partial differential equations based on radial basis functions. In our case we study a solid-solid transformation. The numerical solutions are compared with analytical solutions. In particular, we examine the usefulness of radial basis functions (especially the multiquadric, MQ) for one-dimensional Stefan problems. The position of the moving boundary is simulated by the moving grid method. The resultant RBF-PDE system is solved by affine space decomposition. (author)
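
    A minimal sketch of multiquadric (MQ) collocation in the spirit of Kansa's method, applied to a steady one-dimensional two-point boundary value problem rather than to a moving-boundary Stefan problem; the shape parameter c and the grid are illustrative choices.

```python
import numpy as np

def mq(r2, c):
    """Multiquadric basis phi(r) = sqrt(r^2 + c^2)."""
    return np.sqrt(r2 + c**2)

def mq_dxx(r2, c):
    """Second x-derivative of the multiquadric in 1-D: c^2 / phi^3."""
    return c**2 / (r2 + c**2)**1.5

# Kansa-type collocation for u''(x) = f(x), u(0) = u(1) = 0.
n, c = 40, 0.2
x = np.linspace(0.0, 1.0, n)
r2 = (x[:, None] - x[None, :])**2
f = -np.pi**2 * np.sin(np.pi * x)            # exact solution: u(x) = sin(pi x)

A = mq_dxx(r2, c)                             # PDE rows at the collocation points
A[0], A[-1] = mq(r2[0], c), mq(r2[-1], c)     # boundary rows enforce u itself
rhs = f.copy(); rhs[0] = rhs[-1] = 0.0
lam = np.linalg.solve(A, rhs)                 # expansion coefficients

u = mq(r2, c) @ lam                           # evaluate u at the nodes
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```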

  16. The ADO-nodal method for solving two-dimensional discrete ordinates transport problems

    International Nuclear Information System (INIS)

    Barichello, L.B.; Picoloto, C.B.; Cunha, R.D. da

    2017-01-01

    Highlights: • Two-dimensional discrete ordinates neutron transport. • Analytical Discrete Ordinates (ADO) nodal method. • Heterogeneous media fixed source problems. • Local solutions. - Abstract: In this work, recent results on the solution of fixed-source two-dimensional transport problems, in Cartesian geometry, are reported. Homogeneous and heterogeneous media problems are considered in order to incorporate the idea of an arbitrary number of domain divisions into regions (nodes) when applying the ADO method, which is a method of analytical features, to those problems. The ADO-nodal formulation is developed, for each node, following previous work devoted to heterogeneous media problems. Here, however, the numerical procedure is extended to higher numbers of domain divisions. Such an extension leads, in some cases, to the use of an iterative method for solving the general linear system which defines the arbitrary constants of the general solution. In addition to solving alternative heterogeneous media configurations not reported in previous works, the present approach allows comparisons with results provided by other methodologies generated with refined meshes. Numerical results indicate the ADO solution may achieve a prescribed accuracy using coarser meshes than other schemes.

  17. Comment on "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit".

    Science.gov (United States)

    Carrillo-Bernal, M A; Núñez-Yépez, H N; Salas-Brito, A L; Solis, Didier A

    2015-02-01

    In the referred paper, the authors use a numerical method for solving ordinary differential equations and a softened Coulomb potential -1/√(x² + β²) to study the one-dimensional Coulomb problem by letting the parameter β approach zero. We note that even though their numerical findings in the soft-potential scenario are correct, their conclusions do not extend to the one-dimensional Coulomb problem (β = 0). Their claims regarding the possible existence of an even ground state with energy -∞ with a Dirac-δ eigenfunction and of well-defined parity eigenfunctions in the one-dimensional hydrogen atom are questioned.

  18. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  19. Covariance problem in two-dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Hagen, C.R.

    1979-01-01

    The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case

  20. Use of endochronic plasticity for multi-dimensional small and large strain problems

    International Nuclear Information System (INIS)

    Hsieh, B.J.

    1980-04-01

    The endochronic plasticity theory was proposed in its general form by K.C. Valanis. An intrinsic time measure, which is a property of the material, is used in the theory. The explicit forms of the constitutive equation closely resemble those of the classical theory of linear viscoelasticity. Excellent agreement between the predicted and experimental results is obtained for some metallic and non-metallic materials in one-dimensional cases. No reference on the use of endochronic plasticity consistent with the general theory proposed by Valanis is available in the open literature. In this report, explicit constitutive equations are derived that are consistent with the general theory for one-dimensional (simple tension or compression), two-dimensional plane strain or stress, and three-dimensional axisymmetric problems.

  1. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schroedinger problem and the KPI equation

    International Nuclear Information System (INIS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A.K.; Polivanov, M.C.

    1993-01-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. The authors demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schroedinger equation as an example, it is shown that all types of solutions of the linear problem, as well as spectral data known in the literature, are given as specific values of this unique function - the resolvent function. A new form of the inverse problem is formulated. 7 refs

  2. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap of features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
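
    The record does not spell out the plot's exact construction, so the sketch below is an assumed reading rather than the author's definition: for each sample, plot the distance to its nearest same-class neighbour against the distance to its nearest other-class neighbour. Points falling below the diagonal are the overlap and misclassification candidates the abstract alludes to.

      import numpy as np
      import matplotlib.pyplot as plt
      from scipy.spatial.distance import cdist

      def similarity_dissimilarity(X, y):
          # per sample: distance to nearest same-class and nearest other-class point
          D = cdist(X, X)
          np.fill_diagonal(D, np.inf)                 # ignore self-distances
          same = np.where(y[:, None] == y[None, :], D, np.inf).min(axis=1)
          diff = np.where(y[:, None] != y[None, :], D, np.inf).min(axis=1)
          return same, diff

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),   # two synthetic classes
                     rng.normal(1.5, 1.0, (100, 10))])
      y = np.repeat([0, 1], 100)
      s, d = similarity_dissimilarity(X, y)
      plt.scatter(s, d, c=y)
      plt.axline((0, 0), slope=1, ls='--')              # points below: likely confusions
      plt.xlabel('distance to nearest same-class point')
      plt.ylabel('distance to nearest other-class point')
      plt.show()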

  3. Collisional plasma transport: two-dimensional scalar formulation of the initial boundary value problem and quasi one-dimensional models

    International Nuclear Information System (INIS)

    Mugge, J.W.

    1979-10-01

    The collisional plasma transport problem is formulated as an initial boundary value problem for general characteristic boundary conditions. Starting from the full set of hydrodynamic and electrodynamic equations, an expansion in the electron-ion mass ratio together with a multiple timescale method yields simplified equations on each timescale. On timescales where many collisions have taken place, the initial boundary value problem is formulated for the simplified equations. Through the introduction of potentials, a two-dimensional scalar formulation in terms of quasi-linear integro-differential equations of second order for a domain consisting of plasma and vacuum sub-domains is obtained. (Auth.)

  4. An inverse problem for a one-dimensional time-fractional diffusion problem

    KAUST Repository

    Jin, Bangti

    2012-06-26

    We study an inverse problem of recovering a spatially varying potential term in a one-dimensional time-fractional diffusion equation from the flux measurements taken at a single fixed time corresponding to a given set of input sources. The unique identifiability of the potential is shown for two cases, i.e. the flux at one end and the net flux, provided that the set of input sources forms a complete basis in L²(0, 1). An algorithm of the quasi-Newton type is proposed for the efficient and accurate reconstruction of the coefficient from finite data, and the injectivity of the Jacobian is discussed. Numerical results for both exact and noisy data are presented. © 2012 IOP Publishing Ltd.

  5. On the equivalence of four-dimensional self-duality equations to the continual analogue of the principal chiral field problem

    International Nuclear Information System (INIS)

    Leznov, A.N.

    1987-01-01

    A connection is found between the self-dual equations of 4-dimensional space and the principal chiral field problem in n-dimensional space. It is shown that any solution of the principal chiral field equations in n-dimensional space with arbitrary 2-dimensional functions of definite linear combinations of the 4 variables y, y-bar, z, z-bar as independent arguments satisfies the system of self-dual equations of 4-dimensional space. The general solution of the self-dual equations, depending on a suitable number of functions of three independent variables, coincides with the general solution of the principal chiral field problem when the dimensionality of the space tends to infinity

  6. Effectiveness of Self Instructional Module on Coping Strategies of Tri-Dimensional Problems of Premenopausal Women – A Community Based Study

    Science.gov (United States)

    Boro, Enu; Jamil, MD; Roy, Aakash

    2016-01-01

    Introduction Pre-menopause in women presents with diverse symptoms, encompassing the tri-dimensional spheres of physical, social and psychological domains, which requires development of appropriate coping strategies to overcome these problems. Aim To assess the level of knowledge about tri-dimensional problems in pre-menopausal women and evaluate the effectiveness of a self instruction module on coping strategies for these problems by pre-test and post-test analysis. Materials and Methods In a cross-sectional, community based study, baseline knowledge of tri-dimensional problems was assessed in 300 pre-menopausal women aged 40-49 years, selected by convenient sampling after satisfying the selection criteria, by a pre-formed questionnaire. This was followed by administration of a pre-tested, Self-Instructional Module (SIM). The SIM dealt with imparting knowledge about coping strategies regarding pre-menopausal problems and the participants were required to read and retain the SIM. A post-test was conducted using the same questionnaire after seven days. Statistical Analysis Chi-square test/paired t-test was used for comparing ratios. A 'p-value' <0.05 was considered statistically significant. Results Baseline knowledge of tri-dimensional problems was adequate in 10%, moderate in 73% and inadequate in 17% of women, with a pre-test mean knowledge score of 8.66±2.45. The post-test mean knowledge score was higher (19.11±3.38) compared to the pre-test score. The post-test mean knowledge difference from pre-test was -10.45, with a highly significant paired t-value of -47.45, indicating that the self-instructional module was effective in increasing the knowledge score of pre-menopausal women under study. Conclusion Administration of the self instructional module was shown to significantly increase the knowledge scores in all areas of pre-menopausal tri-dimensional problems. Such a self-instructional module can be used as an effective educational tool in increasing the knowledge

  7. Inverse Problem for Two-Dimensional Discrete Schrödinger Equation

    CERN Document Server

    Serdyukova, S I

    2000-01-01

    For the two-dimensional discrete Schroedinger equation, the boundary-value problem in a rectangle M times N with zero boundary conditions is solved. It is stated in this work that the inverse problem reduces to the reconstruction of a symmetric five-diagonal matrix C with given spectrum and given first k(M,N), 1 ≤ k ... problem to the end in the process of concrete calculations. Deriving and solving the huge polynomial systems had been performed...

  8. Detecting low-dimensional chaos by the “noise titration” technique: Possible problems and remedies

    International Nuclear Information System (INIS)

    Gao Jianbo; Hu Jing; Mao Xiang; Tung Wenwen

    2012-01-01

    Highlights: ► Distinguishing low-dimensional chaos from noise is an important issue. ► Noise titration technique is one of the main approaches on the issue. ► Problems of noise titration technique are systematically discussed. ► Solutions to the problems of noise titration technique are provided. - Abstract: Distinguishing low-dimensional chaos from noise is an important issue in time series analysis. Among the many methods proposed for this purpose is the noise titration technique, which quantifies the amount of noise that needs to be added to the signal to fully destroy its nonlinearity. Two groups of researchers recently have questioned the validity of the technique. In this paper, we report a broad range of situations where the noise titration technique fails, and offer solutions to fix the problems identified.

  9. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux of all four cases. The error analysis was performed by comparing the results from SODDIT and the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively

  10. Two-dimensional unsteady lift problems in supersonic flight

    Science.gov (United States)

    Heaslet, Max A; Lomax, Harvard

    1949-01-01

    The variation of pressure distribution is calculated for a two-dimensional supersonic airfoil either experiencing a sudden angle-of-attack change or entering a sharp-edge gust. From these pressure distributions the indicial lift functions applicable to unsteady lift problems are determined for two cases. Results are presented which permit the determination of maximum increment in lift coefficient attained by an unrestrained airfoil during its flight through a gust. As an application of these results, the minimum altitude for safe flight through a specific gust is calculated for a particular supersonic wing of given strength and wing loading.

  11. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  12. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  13. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-02-02

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.

  15. A parallel algorithm for solving linear equations arising from one-dimensional network problems

    International Nuclear Information System (INIS)

    Mesina, G.L.

    1991-01-01

    One-dimensional (1-D) network problems, such as those arising from 1-D fluid simulations and electrical circuitry, produce systems of sparse linear equations which are nearly tridiagonal and contain a few non-zero entries outside the tridiagonal. Most direct solution techniques for such problems either do not take advantage of the special structure of the matrix or do not fully utilize parallel computer architectures. We describe a new parallel direct linear equation solution algorithm, called TRBR, which is especially designed to take advantage of this structure on MIMD shared memory machines. The new method belongs to a family of methods which split the coefficient matrix into the sum of a tridiagonal matrix T and a matrix comprised of the remaining coefficients R. Efficient tridiagonal methods are used to algebraically simplify the linear system. A smaller auxiliary subsystem is created and solved and its solution is used to calculate the solution of the original system. The newly devised BR method solves the subsystem. The serial and parallel operation counts are given for the new method and related earlier methods. TRBR is shown to have the smallest operation count in this class of direct methods. Numerical results are given. Although the algorithm is designed for one-dimensional networks, it has been applied successfully to three-dimensional problems as well. 20 refs., 2 figs., 4 tabs
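
    TRBR itself is not listed here, so the following sketch only illustrates the splitting algebra the abstract describes, under the added assumption that the off-tridiagonal remainder R can be written as a low-rank product U V^T: a banded solver handles T, and a small auxiliary (capacitance) system supplies the correction, which is the Woodbury identity in matrix form.

      import numpy as np
      from scipy.linalg import solve_banded

      def solve_tri_plus_lowrank(main, lower, upper, U, V, b):
          # Solve (T + U V^T) x = b with T tridiagonal, via the Woodbury identity:
          # x = T^-1 b - T^-1 U (I + V^T T^-1 U)^-1 V^T T^-1 b
          n = len(main)
          ab = np.zeros((3, n))
          ab[0, 1:] = upper                          # superdiagonal
          ab[1, :] = main                            # main diagonal
          ab[2, :-1] = lower                         # subdiagonal
          solve_T = lambda rhs: solve_banded((1, 1), ab, rhs)
          Tb, TU = solve_T(b), solve_T(U)
          S = np.eye(U.shape[1]) + V.T @ TU          # small auxiliary subsystem
          return Tb - TU @ np.linalg.solve(S, V.T @ Tb)

      rng = np.random.default_rng(1)
      n = 8
      main = 4 + rng.random(n)
      lower, upper = -rng.random(n - 1), -rng.random(n - 1)
      U, V = rng.random((n, 2)), rng.random((n, 2))  # R = U V^T holds the stray entries
      b = rng.random(n)
      T = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
      x = solve_tri_plus_lowrank(main, lower, upper, U, V, b)
      assert np.allclose((T + U @ V.T) @ x, b)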

  16. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  17. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  18. Resolvent approach for two-dimensional scattering problems. Application to the nonstationary Schrödinger problem and the KPI equation

    Science.gov (United States)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.; Polivanov, M. C.

    1992-11-01

    The resolvent operator of the linear problem is determined as the full Green function continued in the complex domain in two variables. An analog of the known Hilbert identity is derived. We demonstrate the role of this identity in the study of two-dimensional scattering. Considering the nonstationary Schrödinger equation as an example, we show that all types of solutions of the linear problems, as well as spectral data known in the literature, are given as specific values of this unique function — the resolvent function. A new form of the inverse problem is formulated.

  19. A two-dimensional embedded-boundary method for convection problems with moving boundaries

    NARCIS (Netherlands)

    Y.J. Hassen (Yunus); B. Koren (Barry)

    2010-01-01

    In this work, a two-dimensional embedded-boundary algorithm for convection problems is presented. A moving body of arbitrary boundary shape is immersed in a Cartesian finite-volume grid, which is fixed in space. The boundary surface is reconstructed in such a way that only certain fluxes

  20. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
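
    A simplified sketch of the setting, assuming missingness is encoded as NaN: average cross-products over jointly observed pairs to get a generalized sample covariance, then soft-threshold the off-diagonal entries for the sparse case. The paper's bandable estimator, tuning rules and rate analysis are not reproduced, and lam below is an illustrative constant.

      import numpy as np

      def pairwise_complete_cov(X):
          # NaN marks missing entries; average products over jointly observed pairs
          mask = ~np.isnan(X)
          Xc = np.where(mask, X - np.nanmean(X, axis=0), 0.0)
          counts = mask.astype(float).T @ mask.astype(float)
          return (Xc.T @ Xc) / np.maximum(counts, 1.0)

      def soft_threshold_cov(S, lam):
          # entrywise soft-thresholding, diagonal left untouched
          T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
          np.fill_diagonal(T, np.diag(S))
          return T

      rng = np.random.default_rng(0)
      X = rng.multivariate_normal(np.zeros(5), np.eye(5) + 0.4, size=400)
      X[rng.random(X.shape) < 0.2] = np.nan      # 20% missing completely at random
      S_hat = soft_threshold_cov(pairwise_complete_cov(X), lam=0.1)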

  2. Modification of equivalent charge method for the Robin three-dimensional problem in electrostatics

    International Nuclear Information System (INIS)

    Barsukov, A.B.; Surenskij, A.V.

    1989-01-01

    An approach to solving the Robin problem for the calculation of the potential of the intermediate electrode of an accelerating structure with HFQ focusing is considered. The solution is constructed on the basis of a variational formulation of the equivalent charge method, where the electrostatic problem is reduced to equations for the root-mean-square residuals on the system's conductors. The technique presented permits efficient solution of three-dimensional electrostatics problems for systems of electrodes with rather complicated geometry. Processing time is comparable with that of integral equation methods. 5 refs.; 2 figs

  3. Use of frozen stress in extracting stress intensity factor distributions in three dimensional cracked body problems

    Science.gov (United States)

    Smith, C. W.

    1992-01-01

    The adaptation of the frozen stress photoelastic method to the determination of the distribution of stress intensity factors in three dimensional problems is briefly reviewed. The method is then applied to several engineering problems of practical significance.

  4. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photo-multiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
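
    The speed-up rests on a push-through/Woodbury-type identity that is easy to verify numerically. The toy below strips out the GLS weight matrices (an assumed simplification) and checks that a Tikhonov/LM-style update can be computed by inverting an m x m system instead of an n x n one when measurements are far fewer than parameters.

      import numpy as np

      m, n = 50, 2000                          # far fewer measurements than unknowns
      rng = np.random.default_rng(0)
      J = rng.standard_normal((m, n))          # Jacobian of the forward model
      r = rng.standard_normal(m)               # data-model misfit
      lam = 0.1                                # regularization weight

      # Primal update: inverts an n x n matrix
      dx_primal = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)
      # Equivalent dual form: inverts only an m x m matrix
      dx_dual = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

      assert np.allclose(dx_primal, dx_dual)   # identical result, much cheaper to form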

  5. Basic problems solving for two-dimensional discrete 3 × 4 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model is proposed to overcome the shortcomings of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model, and studies the three basic problems of the model, including probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, there are more statistical characteristics in the structure of the proposed model, therefore the proposed model theoretically can more accurately describe some practical problems.

  6. Many-body problems in high temperature superconductivity

    International Nuclear Information System (INIS)

    Yu Lu.

    1991-10-01

    In this brief review the basic experimental facts about high-Tc superconductors are outlined. The superconducting properties of these superconductors are not very different from those of ordinary superconductors. However, their normal state properties cannot be described by the standard Fermi liquid (FL) theory. Our current understanding of the strongly correlated models is summarized. In one dimension these systems behave like a ''Luttinger liquid'', very much distinct from the FL. In spite of the enormous efforts made in two-dimensional studies, the question of FL vs non-FL behaviour is still open. The numerical results as well as various approximation schemes are discussed. Both the single-hole problem in a quantum antiferromagnet and the finite doping regime are considered. (author). 104 refs, 9 figs

  7. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wong, Chee Wei

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  8. Variational Homotopy Perturbation Method for Solving Higher Dimensional Initial Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    We suggest and analyze a technique by combining the variational iteration method and the homotopy perturbation method. This method is called the variational homotopy perturbation method (VHPM). We use this method for solving higher dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.

  9. Three-dimensional printing in pharmaceutics: promises and problems.

    Science.gov (United States)

    Yu, Deng Guang; Zhu, Li-Min; Branford-White, Christopher J; Yang, Xiang Liang

    2008-09-01

    Three-dimensional printing (3DP) is a rapid prototyping (RP) technology. Prototyping involves constructing specific layers using powder processing and liquid binding materials. Reports in the literature have highlighted the many advantages of the 3DP system over other processes in enhancing pharmaceutical applications; these include new methods in the design, development, manufacture, and commercialization of various types of solid dosage forms. For example, 3DP technology is flexible in that it can be used in applications linked to linear drug delivery systems (DDS), colon-targeted DDS, oral fast disintegrating DDS, floating DDS, time controlled and pulse release DDS, as well as dosage forms with multiphase release properties and implantable DDS. In addition, 3DP can also provide solutions for resolving difficulties relating to the delivery of poorly water-soluble drugs, peptides and proteins, the preparation of DDS for highly toxic and potent drugs, and the controlled release of multiple drugs in a single dosage form. Due to its flexible and highly reproducible manufacturing process, 3DP has some advantages over conventional compressing and other RP technologies in fabricating solid DDS. This enables 3DP to be further developed for use in pharmaceutics applications. However, there are some problems that limit further applications of the system, such as the selection of suitable excipients and the pharmacotechnical properties of 3DP products. Further developments are therefore needed to overcome these issues so that 3DP systems can be successfully combined with conventional pharmaceutics. Here we present an overview of the potential of 3DP in the development of new drug delivery systems.

  10. Relativistic bound-state problem of a one-dimensional system

    International Nuclear Information System (INIS)

    Sato, T.; Niwa, T.; Ohtsubo, H.; Tamura, K.

    1991-01-01

    A Poincare-covariant description of the two-body bound-state problem in one-dimensional space is studied by using the relativistic Schrodinger equation. We derive the many-body Hamiltonian, electromagnetic current and generators of the Poincare group in the framework of one-boson exchange. Our theory satisfies Poincare algebra within the one-boson-exchange approximation. We numerically study the relativistic effects on the bound-state wavefunction and the elastic electromagnetic form factor. The Lorentz boost of the bound-state wavefunction and the two-body exchange current are shown to play an important role in guaranteeing the Lorentz invariance of the form factor. (author)

  11. Numerical solution to a multi-dimensional linear inverse heat conduction problem by a splitting-based conjugate gradient method

    International Nuclear Information System (INIS)

    Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem

    2008-01-01

    In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and the accuracy of the numerical solution to the problem, we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good and the techniques seem to be very promising.
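
    The paper's gradient comes from the adjoint problem of the continuous equations; as a labelled simplification, the sketch below isolates just the conjugate-gradient-with-stopping-rule ingredient on a generic discretized linear model, where stopping once the residual reaches the noise level (the discrepancy principle) supplies the regularization.

      import numpy as np

      def cgls(A, b, noise_level, tau=1.1, max_iter=200):
          # Conjugate gradient on the normal equations A^T A x = A^T b,
          # stopped by the discrepancy principle: quit once ||Ax - b|| <= tau * noise_level,
          # so early stopping plays the role of explicit regularization.
          x = np.zeros(A.shape[1])
          r = b.copy()
          s = A.T @ r
          p = s.copy()
          gamma = s @ s
          for _ in range(max_iter):
              if np.linalg.norm(r) <= tau * noise_level:
                  break
              q = A @ p
              alpha = gamma / (q @ q)
              x += alpha * p
              r -= alpha * q
              s = A.T @ r
              gamma_new = s @ s
              p = s + (gamma_new / gamma) * p
              gamma = gamma_new
          return x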

  12. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering the transport of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to have an effect on the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  13. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
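
    FLANN is shipped with OpenCV, so a usage sketch can stay close to a real API; the random descriptors stand in for, e.g., SIFT vectors, and the index and search parameter values are illustrative choices rather than recommendations from the paper.

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      train = rng.random((10000, 128)).astype(np.float32)   # stand-in descriptor database
      query = rng.random((5, 128)).astype(np.float32)

      FLANN_INDEX_KDTREE = 1                                # randomized k-d forest
      matcher = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=4),
                                      dict(checks=64))      # checks: speed/accuracy knob
      matches = matcher.knnMatch(query, train, k=2)
      good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test
      print(len(good), "confident matches out of", len(matches))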

  14. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic.  The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation.  Part III, consist...

  15. A study of the one dimensional total generalised variation regularisation problem

    KAUST Repository

    Papafitsoros, Konstantinos; Bredies, Kristian

    2015-03-01

    © 2015 American Institute of Mathematical Sciences. In this paper we study the one dimensional second order total generalised variation regularisation (TGV) problem with L2 data fitting term. We examine the properties of this model and we calculate exact solutions using simple piecewise affine functions as data terms. We investigate how these solutions behave with respect to the TGV parameters and we verify our results using numerical experiments.

  17. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set up to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big-data. We also propose a novel dynamic quantization called Query dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
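
    The query-dependent refinement that makes QED dynamic is not detailed in this record, so the sketch below shows only the static equi-depth building block, as an assumption: quantile-based bin edges give near-uniform bucket occupancy even for skewed attributes, which is what keeps the resulting per-attribute codes (one bit-slice per code bit) discriminative.

      import numpy as np

      def equi_depth_edges(x, n_bins):
          # bin edges that hold (roughly) equal numbers of points per bin
          qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
          return np.quantile(x, qs)

      rng = np.random.default_rng(0)
      x = rng.lognormal(size=10000)          # heavily skewed attribute
      edges = equi_depth_edges(x, 8)
      codes = np.searchsorted(edges, x)      # 3-bit code per value
      print(np.bincount(codes))              # near-uniform occupancy across bins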

  18. High dimensional entanglement

    CSIR Research Space (South Africa)

    Mc

    2012-07-01

    High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001. 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland. 3. School of Physics, University of Kwazulu...

  19. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation... that allow one to make effective inference in problems with a high degree of complexity (e.g. several thousands of genes) and a small number of observations (e.g. 10-100), as typically occurs in high-throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we... construct a compact representation of the co-expression network that allows us to identify the regions with a high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...

  20. Localization of the solution of a one-dimensional one-phase Stefan problem

    OpenAIRE

    Cortazar, C.; Elgueta, M.; Primicerio, M.

    1996-01-01

    We study localization, the set of blow up points and some aspects of the speed of the free boundary of solutions of a one-dimensional, one-phase Stefan problem.

  1. Boundary element methods applied to two-dimensional neutron diffusion problems

    International Nuclear Information System (INIS)

    Itagaki, Masafumi

    1985-01-01

    The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)

  2. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derived regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
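
    For orientation, here is a minimal sketch of the plain common value-shrinkage baseline that the abstract says MVR improves upon (MVR's clustering-based local pooling and joint mean-variance treatment are richer); lam is a fixed illustrative weight rather than an adaptively chosen one.

      import numpy as np

      def shrink_variances(X, lam=0.5):
          # shrink each variable's sample variance toward the pooled mean variance;
          # lam in [0, 1]: 0 = raw per-variable variances, 1 = fully pooled
          v = X.var(axis=1, ddof=1)          # rows = variables, columns = samples
          return (1 - lam) * v + lam * v.mean()

      rng = np.random.default_rng(0)
      X = rng.standard_normal((5000, 4))     # 5000 variables, only 4 samples each
      print(X.var(axis=1, ddof=1).std(), shrink_variances(X).std())
      # the shrunken estimates are markedly less dispersed, i.e. less noisy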

  3. Continuity of the direct and inverse problems in one-dimensional scattering theory and numerical solution of the inverse problem

    International Nuclear Information System (INIS)

    Moura, C.A. de.

    1976-09-01

    We propose an algorithm for computing the potential V(x) associated with the one-dimensional Schroedinger operator E ≡ -d²/dx² + V(x), -∞ < x < ∞, from knowledge of the S-matrix, more exactly, of one of the reflection coefficients. The convergence of the algorithm is guaranteed by the stability results obtained for both the direct and inverse problems

  4. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high dimensionality data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional functions cannot capture the pattern dissimilarity among objects. In this article, we used an alternative dissimilarity measurement called Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  5. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data can be a technically challenging task, and if the data is also high-dimensional, the task can become even more difficult. In the biomedicine field, skewed data types often appear. In this study, we try to deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work and an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of Accuracy, F-measure, G-mean and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool to deal with high-dimensional and imbalanced biomedical data.
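
    A sketch of the recipe as described, under stated assumptions: binary labels with 1 as the minority class, an RBF SVM base learner, plain random feature subspaces standing in for the paper's FSS, and simple majority voting.

      import numpy as np
      from sklearn.svm import SVC

      def asbagging_fss_predict(X, y, X_test, n_bags=21, n_feats=None, seed=0):
          # each bag: all minority samples + an equal-size random draw of majority
          # samples, restricted to a random feature subset; majority vote at the end
          rng = np.random.default_rng(seed)
          minority = np.flatnonzero(y == 1)
          majority = np.flatnonzero(y == 0)
          n_feats = n_feats or max(1, X.shape[1] // 2)
          votes = np.zeros(len(X_test))
          for _ in range(n_bags):
              maj = rng.choice(majority, size=len(minority), replace=False)
              idx = np.concatenate([minority, maj])
              feats = rng.choice(X.shape[1], size=n_feats, replace=False)
              clf = SVC(kernel='rbf').fit(X[np.ix_(idx, feats)], y[idx])
              votes += clf.predict(X_test[:, feats])
          return (votes > n_bags / 2).astype(int)

    Each bag sees a balanced class distribution, so the base SVMs are not swamped by the majority class, while the differing feature subsets decorrelate their errors.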

  6. The nodal discrete-ordinate transport calculation of anisotropy scattering problem in three-dimensional cartesian geometry

    International Nuclear Information System (INIS)

    Wu Hongchun; Xie Zhongsheng; Zhu Xuehua

    1994-01-01

    A nodal discrete-ordinates transport model for anisotropic scattering problems in three-dimensional Cartesian geometry is given. The computer code NOTRAN/3D has been developed, and satisfactory results have been obtained

  7. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  8. Continuous Energy, Multi-Dimensional Transport Calculations for Problem Dependent Resonance Self-Shielding

    International Nuclear Information System (INIS)

    Downar, T.

    2009-01-01

    The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system. Specifically, the methods here utilize the existing continuous energy SCALE5 module, CENTRM, and the multi-dimensional discrete ordinates solver, NEWT, to develop a new code, CENTRM/NEWT. The work here addresses specific theoretical limitations in the existing CENTRM resonance treatment, as well as investigates advanced numerical and parallel computing algorithms for CENTRM and NEWT in order to reduce the computational burden. The result of the work here will be a new computer code capable of performing problem dependent self-shielding analysis for both existing and proposed GENIV fuel designs. The objective of the work was to have an immediate impact on the safety analysis of existing reactors through improvements in the calculation of fuel temperature effects, as well as on the analysis of more sophisticated GENIV/NGNP systems through improvements in the depletion/transmutation of actinides for Advanced Fuel Cycle Initiatives.

  9. Application of space-angle synthesis to two-dimensional neutral-particle transport problems of weapon physics

    International Nuclear Information System (INIS)

    Roberds, R.M.

    1975-01-01

    A space-angle synthesis (SAS) method has been developed for treating the steady-state, two-dimensional transport of neutrons and gamma rays from a point source of simulated nuclear weapon radiation in air. The method was validated by applying it to the problem of neutron transport from a point source in air over a ground interface, and then comparing the results to those obtained by DOT, a state-of-the-art, discrete-ordinates code. In the SAS method, the energy dependence of the Boltzmann transport equation was treated in the standard multigroup manner. The angular dependence was treated by expanding the flux in specially tailored trial functions and applying the method of weighted residuals which analytically integrated the transport equation over all angles. The weighted-residual approach was analogous to the conventional spherical-harmonics (P_N) method with the exception that the tailored expansion allowed for more rapid convergence than a spherical-harmonics P_1 expansion and resulted in a greater degree of accuracy. The trial functions used in the expansion were odd and even combinations of selected trial solutions, the trial solutions being shaped ellipsoids which approximated the angular distribution of the neutron flux in one-dimensional space. The parameters which described the shape of the ellipsoid varied with energy group and the spatial medium, only, and were obtained from a one-dimensional discrete-ordinates calculation. Thus, approximate transport solutions were made available for all two-dimensional problems of a certain class by using tabulated parameters obtained from a single, one-dimensional calculation

  10. Uniqueness in some higher order elliptic boundary value problems in n dimensional domains

    Directory of Open Access Journals (Sweden)

    C.-P. Danet

    2011-07-01

    We develop maximum principles for several P functions which are defined on solutions to equations of fourth and sixth order (including an equation which arises in plate theory and the bending of cylindrical shells). As a consequence, we obtain uniqueness results for fourth and sixth order boundary value problems in arbitrary n dimensional domains.

  11. Explicit formulation of a nodal transport method for discrete ordinates calculations in two-dimensional fixed-source problems

    Energy Technology Data Exchange (ETDEWEB)

    Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Prolo Filho, Joao Francisco [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica, Estatistica e Fisica; Dias da Cunha, Rudnei; Basso Barichello, Liliane [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica

    2014-04-15

    In this work a study of two-dimensional fixed-source neutron transport problems, in Cartesian geometry, is reported. The approach reduces the complexity of the multidimensional problem using a combination of nodal schemes and the Analytical Discrete Ordinates Method (ADO). The unknown leakage terms on the boundaries that arise from the derivation of the nodal scheme are incorporated into the problem source term, so as to couple the one-dimensional integrated solutions, which are made explicit in terms of the x and y spatial variables. The formulation leads to a considerable reduction of the order of the associated eigenvalue problems when combined with the usual symmetric quadratures, thereby providing solutions that have a higher degree of computational efficiency. Reflective-type boundary conditions are introduced to represent the domain in a simpler form than that previously considered in connection with the ADO method. Numerical results obtained with the technique are provided and compared to those present in the literature. (orig.)

  12. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  13. Electrons, pseudoparticles, and quasiparticles in the one-dimensional many-electron problem

    International Nuclear Information System (INIS)

    Carmelo, J.M.; Castro Neto, A.H.

    1996-01-01

    We generalize the concept of quasiparticle for one-dimensional (1D) interacting electronic systems. The ↑ and ↓ quasiparticles recombine the pseudoparticle colors c and s (charge and spin at zero magnetic field) and are constituted by one many-pseudoparticle topological-momentum shift and one or two pseudoparticles. These excitations cannot be separated. We consider the case of the Hubbard chain. We show that the low-energy electron-quasiparticle transformation has a singular character which justifies the perturbative and nonperturbative nature of the quantum problem in the pseudoparticle and electronic basis, respectively. This follows from the absence of zero-energy electron-quasiparticle overlap in 1D. The existence of Fermi-surface quasiparticles both in 1D and three-dimensional (3D) many-electron systems suggests their existence in quantum liquids in dimensions 1 < D < 3. However, whether the electron-quasiparticle overlap vanishes throughout 1 < D < 3 or whether it becomes finite as soon as we leave 1D remains an unsolved question. © 1996 The American Physical Society

  14. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  15. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
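
    The column-generation core of such approaches is compact enough to sketch. Below is a minimal single-stock-size variant in Python (the 1DMSSCSP itself uses multiple stock sizes, and the PSG's residual-problem loop is omitted): a restricted master LP prices the current patterns, and an unbounded-knapsack pricing problem generates a new pattern whenever one with negative reduced cost exists. The stock length, item lengths, and demands are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    L = 100                               # stock length (assumed demo data)
    lengths = np.array([45, 36, 31, 14])  # item lengths (assumed demo data)
    demand = np.array([97, 610, 395, 211])

    # start with trivial patterns: one item type per stock piece
    patterns = [np.eye(1, len(lengths), i, dtype=int)[0] * (L // l)
                for i, l in enumerate(lengths)]

    while True:
        A = np.array(patterns).T
        # restricted master LP: minimize stock pieces subject to demand
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                      bounds=(0, None), method="highs")
        duals = -res.ineqlin.marginals    # dual prices of the demand rows
        # pricing: unbounded knapsack maximizing the dual value of one pattern
        best, choice = np.zeros(L + 1), np.full(L + 1, -1)
        for cap in range(1, L + 1):
            for i, l in enumerate(lengths):
                if l <= cap and best[cap - l] + duals[i] > best[cap]:
                    best[cap] = best[cap - l] + duals[i]
                    choice[cap] = i
        if best[L] <= 1.0 + 1e-9:         # no improving pattern exists: stop
            break
        new, cap = np.zeros(len(lengths), dtype=int), L
        while cap > 0 and choice[cap] >= 0:   # recover pattern from DP choices
            new[choice[cap]] += 1
            cap -= lengths[choice[cap]]
        patterns.append(new)

    print(f"{len(patterns)} patterns, LP bound {res.fun:.2f} stock pieces")
    ```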

  16. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)···f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(−r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(−r)). © 2013 Springer Science+Business Media New York.
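
    The identity behind such point-query algorithms is easy to check numerically: since f(x) = f_1(x_1)···f_d(x_d), the product of the d axis-aligned queries through a point z with f(z) ≠ 0 equals f(x)·f(z)^(d−1). The sketch below exploits this with a fixed interpolation grid along each axis; the paper's algorithm instead places queries using discrepancy theory plus an adaptive second round, which is omitted here, and the test function is purely illustrative.

    ```python
    import numpy as np

    d = 4
    def f(x):  # hypothetical rank-one target on [0,1]^d
        return np.prod([np.sin(1.0 + (j + 1) * xj) for j, xj in enumerate(x)])

    z = np.full(d, 0.5)          # assumed point with f(z) != 0
    fz = f(z)

    def f_approx(x, n=65):
        """Reconstruct f(x) from O(d*n) axis-aligned queries through z,
        using f(x) = f(z) * prod_j [ f(z_1,..,x_j,..,z_d) / f(z) ]."""
        val = fz
        grid = np.linspace(0.0, 1.0, n)
        for j in range(d):
            # query f along the j-th axis through z, then interpolate
            line = np.array([f(np.concatenate([z[:j], [g], z[j + 1:]]))
                             for g in grid])
            val *= np.interp(x[j], grid, line) / fz
        return val

    x_test = np.random.default_rng(0).random(d)
    print(f(x_test), f_approx(x_test))   # the two values should nearly agree
    ```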

  18. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    Science.gov (United States)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve the generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. We transform the considered fractional-order problem into an easily solvable system of algebraic equations with the aid of the operational matrices; solving this algebraic system then yields the solution of the problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing the results obtained from our Matlab simulations with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  19. One-dimensional central-force problem, including radiation reaction

    International Nuclear Information System (INIS)

    Kasher, J.C.

    1976-01-01

    Two equal masses of equal charge magnitude (either attractive or repulsive) are held a certain distance apart for their entire past history. At t = 0 one of them is either started from rest or given an initial velocity toward or away from the other charge. When the Dirac radiation-reaction force is included in the force equation, our Taylor-series numerical calculations lead to two types of nonphysical results for both the attractive and repulsive cases. In the attractive case, the moving charge either stops and moves back out to infinity, or violates energy conservation as it nears collision with the fixed charge. For the repulsive charges, the moving particle either eventually approaches and collides with the fixed one, or violates energy conservation as it goes out to infinity. These results lead us to conclude that the Lorentz-Dirac equation is not valid for the one-dimensional central-force problem.

  20. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes seemingly forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
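
    Since the abstract names the two plug-in points (a classifier inside the fitness function, a metaheuristic for the search), a minimal wrapper-style sketch can make the scheme concrete. The following assumes scikit-learn is available; particles are binary feature masks, fitness is cross-validated accuracy, and the update rule (drift toward the best mask plus random bit flips) is an illustrative stand-in for the metaheuristics the paper plugs in, not the authors' exact scheme.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(42)
    X, y = load_breast_cancer(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask):
        """Wrapper fitness: CV accuracy of a plug-in classifier on the subset."""
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

    # minimal binary swarm: masks drift toward the best mask found so far,
    # with random bit flips playing the role of exploration
    n_particles, n_iter, flip_p = 12, 20, 0.1
    swarm = rng.random((n_particles, n_features)) < 0.5
    best_mask, best_fit = swarm[0].copy(), fitness(swarm[0])

    for _ in range(n_iter):
        for i in range(n_particles):
            pull = rng.random(n_features) < 0.5      # move toward global best
            swarm[i][pull] = best_mask[pull]
            swarm[i] ^= rng.random(n_features) < flip_p   # mutate
            fit = fitness(swarm[i])
            if fit > best_fit:
                best_fit, best_mask = fit, swarm[i].copy()

    print(f"best CV accuracy {best_fit:.3f} with {best_mask.sum()} features")
    ```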

  1. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
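
    The structure of such an estimator can be illustrated in a few lines: form the difference of the two sample correlation matrices and threshold it entrywise. The sketch below uses a single universal threshold of order √(log p / n) for simplicity, whereas the paper's adaptive procedure chooses entry-specific thresholds with theoretical guarantees.

    ```python
    import numpy as np

    def differential_correlation(X1, X2, thresh=None):
        """Estimate D = corr(X2) - corr(X1) with entrywise hard thresholding.
        A simplified stand-in for the adaptive thresholding estimator."""
        n, p = X1.shape
        D = np.corrcoef(X2, rowvar=False) - np.corrcoef(X1, rowvar=False)
        if thresh is None:
            thresh = 2.0 * np.sqrt(np.log(p) / min(n, X2.shape[0]))
        return np.where(np.abs(D) >= thresh, D, 0.0)

    rng = np.random.default_rng(1)
    X1 = rng.standard_normal((200, 50))
    X2 = rng.standard_normal((200, 50))
    X2[:, 1] = X2[:, 0] + 0.5 * rng.standard_normal(200)  # differential pair
    D_hat = differential_correlation(X1, X2)
    # entries (0,1) and (1,0) should be among the flagged pairs
    print(np.argwhere(np.abs(D_hat) > 0)[:4])
    ```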

  2. Invert 1.0: A program for solving the nonlinear inverse heat conduction problem for one-dimensional solids

    International Nuclear Information System (INIS)

    Snider, D.M.

    1981-02-01

    INVERT 1.0 is a digital computer program written in FORTRAN IV which calculates the surface heat flux of a one-dimensional solid using an interior-measured temperature and a physical description of the solid. By using two interior-measured temperatures, INVERT 1.0 can provide a solution for the heat flux at two surfaces, the heat flux at a boundary and the time dependent power, or the heat flux at a boundary and the time varying thermal conductivity of a material composing the solid. The analytical solution to the inversion problem is described for the one-dimensional cylinder, sphere, or rectangular slab. The program structure, input instructions, and sample problems demonstrating the accuracy of the solution technique are included.

  3. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  4. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold, justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method, deep learning with MNIST, and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts: the bulk, which is concentrated around zero, and the edges, which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would

  5. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
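
    The core computation reduces to a small linear program: express the current state as a convex combination of library states while minimizing the L1 approximation error explicitly. A minimal sketch, assuming SciPy; the free-running prediction step (applying the learned weights to the successors of the selected library states) is omitted.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def barycentric_weights(vertices, x):
        """Solve  min ||vertices.T @ w - x||_1  s.t.  w >= 0, sum(w) = 1.
        Variables are [w (m weights), e (D per-coordinate error bounds)]."""
        m, D = vertices.shape
        c = np.concatenate([np.zeros(m), np.ones(D)])   # minimize total error
        A_ub = np.block([[vertices.T, -np.eye(D)],      # +(Yw - x) <= e
                         [-vertices.T, -np.eye(D)]])    # -(Yw - x) <= e
        b_ub = np.concatenate([x, -x])
        A_eq = np.concatenate([np.ones(m), np.zeros(D)])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None), method="highs")
        return res.x[:m]

    rng = np.random.default_rng(0)
    library = rng.standard_normal((50, 10))   # 50 past states in 10 dimensions
    query = 0.5 * library[3] + 0.5 * library[7]
    w = barycentric_weights(library, query)
    print(np.abs(library.T @ w - query).max())   # near-zero residual
    print(w.sum(), w.min() >= 0)                 # valid convex weights
    ```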

  6. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  7. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  8. Verification of a three-dimensional neutronics model based on multi-point kinetics equations for transient problems

    Energy Technology Data Exchange (ETDEWEB)

    Park, Kyung Seok; Kim, Hyun Dae; Yeom, Choong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    A computer code for solving three-dimensional reactor neutronic transient problems, utilizing the recently developed multi-point reactor kinetics equations, has been developed. For evaluating its applicability, the code has been tested with typical 3-D LWR and CANDU reactor transient problems. The performance of the method and code has been compared with the results of fine- and coarse-mesh computer codes employing direct methods.

  9. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance for each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph- (K-NNG-) based anomaly detectors. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset in order to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
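
    A compact sketch of the two-stage architecture, with PCA standing in for the deep autoencoder so that the ensemble idea stays in focus: compress to a low-dimensional subspace, then average mean k-NN distance scores from detectors fitted on random subsets of the nominal data. All parameter values and data are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import NearestNeighbors

    def hybrid_anomaly_scores(X_train, X_test, dim=5, n_detectors=10,
                              subset_frac=0.5, k=5, seed=0):
        """Stage 1: compress (PCA here, a stand-in for the paper's DAE).
        Stage 2: ensemble of k-NN distance detectors on random subsets."""
        rng = np.random.default_rng(seed)
        pca = PCA(n_components=dim).fit(X_train)
        Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)
        scores = np.zeros(len(X_test))
        n_sub = int(subset_frac * len(Z_train))
        for _ in range(n_detectors):
            idx = rng.choice(len(Z_train), size=n_sub, replace=False)
            nn = NearestNeighbors(n_neighbors=k).fit(Z_train[idx])
            dist, _ = nn.kneighbors(Z_test)
            scores += dist.mean(axis=1)     # mean k-NN distance as score
        return scores / n_detectors

    rng = np.random.default_rng(1)
    X_train = rng.standard_normal((500, 50))                # nominal sample
    X_test = np.vstack([rng.standard_normal((20, 50)),
                        rng.standard_normal((5, 50)) + 3.0])  # 5 anomalies
    s = hybrid_anomaly_scores(X_train, X_test)
    print(s[:20].mean(), s[20:].mean())   # anomalies should score higher
    ```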

  10. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  11. Analytical Modeling of Transient Process In Terms of One-Dimensional Problem of Dynamics With Kinematic Action

    Directory of Open Access Journals (Sweden)

    Kravets Victor V.

    2016-05-01

    Full Text Available One-dimensional dynamic design of a component characterized by an inertia coefficient, an elastic coefficient, and a coefficient of energy dispersion is considered. The component is affected by external action in the form of time-independent and time-varying initial kinematic disturbances. A mathematical model of the component dynamics, as well as a new form of analytical representation of the transient process in terms of the one-dimensional problem of kinematic action, is provided. The dynamic design of the component is carried out according to the theory of modal control.

  12. Central subspace dimensionality reduction using covariance operators.

    Science.gov (United States)

    Kim, Minyoung; Pavlovic, Vladimir

    2011-04-01

    We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
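
    COIR generalizes inverse regression beyond explicit output-space slicing; the classical sliced inverse regression (SIR) baseline it builds on fits in a few lines and shows concretely what estimating a central subspace via inverse regression means. The sketch below is SIR, not COIR, and uses a scalar response for simplicity.

    ```python
    import numpy as np

    def sir_directions(X, y, n_slices=10, n_dirs=2):
        """Sliced Inverse Regression: whiten X, average it within slices of
        the sorted response, and take the top eigenvectors of the matrix of
        slice means as central-subspace directions."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening transform
        Z = Xc @ W
        order = np.argsort(y)
        M = np.zeros((p, p))
        for sl in np.array_split(order, n_slices):
            m = Z[sl].mean(axis=0)
            M += len(sl) / n * np.outer(m, m)
        _, vecs = np.linalg.eigh(M)
        return W @ vecs[:, -n_dirs:]   # map back to the original scale

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 6))
    y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(1000)
    B = sir_directions(X, y)
    print(np.round(B[:, -1], 2))   # leading direction should load on x0
    ```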

  13. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
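
    The setting is easy to simulate in the simplest special case. For an identity integrand, the realized covariation matrix of an N-dimensional Brownian motion observed at n high-frequency points is a Wishart-type matrix whose empirical spectral distribution approaches the Marchenko-Pastur law with ratio c = N/n; the sketch below checks the support edges numerically. The paper treats general time-varying matrix-valued integrands, for which the limit differs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, n = 200, 400                                  # ratio c = N/n = 0.5
    dW = rng.standard_normal((n, N)) / np.sqrt(n)    # BM increments on [0,1]
    RC = dW.T @ dW                                   # realized covariation
    eigvals = np.linalg.eigvalsh(RC)

    # Marchenko-Pastur support is [(1-sqrt(c))^2, (1+sqrt(c))^2]
    c = N / n
    print(eigvals.min(), (1 - np.sqrt(c)) ** 2)      # lower edge
    print(eigvals.max(), (1 + np.sqrt(c)) ** 2)      # upper edge
    ```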

  14. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines

  15. A Simple Proof of the Theorem Concerning Optimality in a One-Dimensional Ergodic Control Problem

    International Nuclear Information System (INIS)

    Fujita, Y.

    2000-01-01

    We give a simple proof of the theorem concerning optimality in a one-dimensional ergodic control problem. We characterize the optimal control in the class of all Markov controls. Our proof is probabilistic and does not need to solve the corresponding Bellman equation. This simplifies the proof.

  16. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalty have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply the LASSO-type penalty to further reduce the number of disease associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and a dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly with limited computational studies.
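
    The computational point, namely that the kernel (dual) formulation works with an n × n matrix rather than in the m-dimensional feature space, can be seen already with plain kernel ridge regression on log survival times. This sketch ignores censoring and the adaptive variable-selection machinery, both central to the actual method; the data are synthetic.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    n, m = 100, 5000                       # n samples, m "genes" (n << m)
    X = rng.standard_normal((n, m))
    # AFT-style model on the log scale: log T = f(x) + noise
    log_t = X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n)

    # the dual solve involves only the n x n kernel matrix, so the cost is
    # driven by the sample size, not by the m-dimensional feature space
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / m)
    model.fit(X, log_t)
    pred = model.predict(X)
    print(np.corrcoef(pred, log_t)[0, 1])  # in-sample fit on log times
    ```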

  17. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
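
    The recovery procedure these restricted isometry conditions govern is L1 minimization (basis pursuit), which becomes a linear program after splitting the signal into positive and negative parts. A minimal sketch with SciPy on synthetic data; sharp constants such as δ_k < 1/3 characterize when this LP provably recovers every k-sparse signal.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, b):
        """Sparse recovery via  min ||x||_1  s.t.  Ax = b.
        Split x = u - v with u, v >= 0 to obtain a standard LP."""
        n, p = A.shape
        c = np.ones(2 * p)                 # ||x||_1 = sum(u) + sum(v)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
        return res.x[:p] - res.x[p:]

    rng = np.random.default_rng(0)
    n, p, k = 60, 200, 5
    A = rng.standard_normal((n, p)) / np.sqrt(n)   # random Gaussian matrix
    x_true = np.zeros(p)
    x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
    x_hat = basis_pursuit(A, A @ x_true)
    print(np.max(np.abs(x_hat - x_true)))          # near-zero recovery error
    ```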

  18. Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model referred to as the two-dimensional continuous 3 × 3 order hidden Markov model is put forward to avoid the disadvantages of the classical hypothesis of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived, by exploiting the idea that the sequences of states on the rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed new model, it can more accurately describe some practical problems, as compared to the two-dimensional continuous hidden Markov model.

  19. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

    Full Text Available Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in a cluster. During clustering, certain difficulties in dealing with high dimensional data are ubiquitous and abundant. Previous works using anonymization methods for high dimensional data spaces failed to address the problem of dimensionality reduction when non-binary databases are included. In this work we study methods of dimensionality reduction for non-binary databases. Analyzing the behavior of dimensionality reduction for non-binary databases results in performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for the relevancy of attributes and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features; then, during decoding, individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  20. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
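
    Recurrence plot measures are straightforward to compute. The sketch below builds the binary recurrence matrix and two standard recurrence quantification measures, recurrence rate and determinism, and shows how they separate periodic from stochastic dynamics. The threshold choice and test series are illustrative, and the determinism here is a simplified variant (the line of identity is not excluded).

    ```python
    import numpy as np

    def recurrence_matrix(x, eps=None):
        """Binary recurrence plot R[i,j] = 1 if ||x_i - x_j|| < eps.
        x has shape (time, dim); eps defaults to 10% of the max distance."""
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        if eps is None:
            eps = 0.1 * d.max()
        return (d < eps).astype(int)

    def recurrence_rate(R):
        return R.mean()

    def determinism(R, lmin=2):
        """Fraction of recurrence points on diagonal lines of length >= lmin."""
        n = R.shape[0]
        on_lines = 0
        for k in range(-(n - 1), n):
            diag = np.diagonal(R, offset=k)
            # splitting at every zero isolates the runs of ones
            runs = np.split(diag, np.where(diag == 0)[0])
            on_lines += sum(r.sum() for r in runs if r.sum() >= lmin)
        return on_lines / max(R.sum(), 1)

    t = np.linspace(0, 8 * np.pi, 400)
    periodic = np.column_stack([np.sin(t), np.cos(t)])
    noise = np.random.default_rng(0).standard_normal((400, 2))
    for name, x in [("periodic", periodic), ("noise", noise)]:
        R = recurrence_matrix(x)
        print(name, recurrence_rate(R), determinism(R))
    ```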

  1. Generalized coherent states for the Coulomb problem in one dimension

    International Nuclear Information System (INIS)

    Nouri, S.

    2002-01-01

    A set of generalized coherent states for the one-dimensional Coulomb problem in coordinate representation is constructed. At first, we obtain a mapping for the proper transformation of the one-dimensional Coulomb problem into a nonrotating four-dimensional isotropic harmonic oscillator in hyperspherical space, and the generalized coherent states for the one-dimensional Coulomb problem are then obtained in exact closed form. This exactly soluble model can provide an adequate means for a quantum coherency description of the Coulomb problem in one dimension, for example for coherent aspects of the one-dimensional exciton model in high-temperature superconductivity, semiconductors, and polymers. Also, it can be useful for investigating the coherent scattering of Coulomb particles in one dimension.

  2. Comparison of three-dimensional ocean general circulation models on a benchmark problem

    International Nuclear Information System (INIS)

    Chartier, M.

    1990-12-01

    A French and an American ocean general circulation model for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite difference techniques. Results show that the dynamics simulated by both models are consistent. Several methods for running a model from a known state are tested in the French model: the diagnostic method, the prognostic method, the acceleration of convergence, and the robust-diagnostic method.

  3. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.

  4. Hydraulic performance numerical simulation of high specific speed mixed-flow pump based on quasi three-dimensional hydraulic design method

    International Nuclear Information System (INIS)

    Zhang, Y X; Su, M; Hou, H C; Song, P F

    2013-01-01

    This research adopts the quasi three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump in order to verify the hydraulic design method and improve hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by employing iterative calculation to solve the continuity and momentum equations of the fluid. The inverse problem is completed by using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the shape of the impeller and the flow field information can be obtained once the result of the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades, and the reflux phenomenon are analyzed. The numerical results show that the quasi three-dimensional hydraulic design method for the high specific speed mixed-flow pump improves the hydraulic performance, reveals the main characteristics of the internal flow of the mixed-flow pump, and provides a basis for judging the rationality of the hydraulic design and for improvement and optimization of the hydraulic model.

  5. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

    Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
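
    A simplified cousin of such a test can be written as a permutation test on the leading eigenvalue of the differential covariance matrix. The sketch below omits sLED's key ingredients, sparse (thresholded) leading eigenvalues and their asymptotic calibration, but shows the overall shape of the procedure on synthetic data.

    ```python
    import numpy as np

    def led_test(X1, X2, n_perm=200, seed=0):
        """Permutation p-value for the spectral norm of cov(X2) - cov(X1)."""
        rng = np.random.default_rng(seed)
        def stat(A, B):
            D = np.cov(B, rowvar=False) - np.cov(A, rowvar=False)
            return np.abs(np.linalg.eigvalsh(D)).max()
        t_obs = stat(X1, X2)
        pooled = np.vstack([X1, X2])
        n1 = len(X1)
        t_perm = []
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            t_perm.append(stat(pooled[idx[:n1]], pooled[idx[n1:]]))
        return (np.sum(np.array(t_perm) >= t_obs) + 1) / (n_perm + 1)

    rng = np.random.default_rng(1)
    X1 = rng.standard_normal((100, 30))
    X2 = rng.standard_normal((100, 30))
    X2[:, 0] += X2[:, 1]                 # induce a covariance difference
    print(led_test(X1, X2))              # small p-value expected
    ```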

  6. Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh-Bénard convection

    Science.gov (United States)

    Wen, Baole; Chini, Gregory P.; Kerswell, Rich R.; Doering, Charles R.

    2015-10-01

    An alternative computational procedure for numerically solving a class of variational problems arising from rigorous upper-bound analysis of forced-dissipative infinite-dimensional nonlinear dynamical systems, including the Navier-Stokes and Oberbeck-Boussinesq equations, is analyzed and applied to Rayleigh-Bénard convection. A proof that the only steady state to which this numerical algorithm can converge is the required global optimal of the relevant variational problem is given for three canonical flow configurations. In contrast with most other numerical schemes for computing the optimal bounds on transported quantities (e.g., heat or momentum) within the "background field" variational framework, which employ variants of Newton's method and hence require very accurate initial iterates, the new computational method is easy to implement and, crucially, does not require numerical continuation. The algorithm is used to determine the optimal background-method bound on the heat transport enhancement factor, i.e., the Nusselt number (Nu), as a function of the Rayleigh number (Ra), Prandtl number (Pr), and domain aspect ratio L in two-dimensional Rayleigh-Bénard convection between stress-free isothermal boundaries (Rayleigh's original 1916 model of convection). The result of the computation is significant because analyses, laboratory experiments, and numerical simulations have suggested a range of exponents α and β in the presumed Nu ~ Pr^α Ra^β scaling relation. The computations clearly show that for Ra ≤ 10^10 at fixed L = 2√2, Nu ≤ 0.106 Pr^0 Ra^(5/12), which indicates that molecular transport cannot generally be neglected in the "ultimate" high-Ra regime.

  7. Three-Dimensional Finite Element Simulation of the Buried Pipe Problem in Geogrid Reinforced Soil

    Directory of Open Access Journals (Sweden)

    Mohammed Yousif Fattah

    2016-05-01

    Full Text Available Buried pipeline systems are commonly used to transport water, sewage, natural oil/gas and other materials. The benefit of using geogrid reinforcement is to increase the bearing capacity of the soil and decrease the load transferred to the underground structures. This paper deals with numerical simulation of the buried pipe problem by the finite element method using the newest version of the PLAXIS-3D software. The study of Rajkumar and Ilamaruthi (2008) has been selected for reanalysis as a 3D problem because it contains all the properties needed by the program, such as the modulus of elasticity, Poisson's ratio, and the angle of internal friction. It was found that the vertical crown deflections for the model without geogrid obtained from PLAXIS-3D are higher than those obtained by the two-dimensional plane-strain analysis by about 21.4%, while this percentage becomes 12.1% for the model with geogrid; in general, both have the same trend. The two-dimensional finite element predictions of pipe-soil system behavior indicate an almost linear increase of pipe deflection with applied pressure, while the 3-D analysis exhibited nonlinear behavior, especially at higher loads.

  8. Two numerical methods for the solution of two-dimensional eddy current problems

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.

    1978-07-01

    A general method for the solution of eddy current problems in two dimensions (one component of current density and two of magnetic field) is reported. After examining analytical methods, two numerical methods are presented. Both solve the two-dimensional, low-frequency limit of Maxwell's equations for transient eddy currents in conducting material, which may be permeable, in the presence of other non-conducting permeable material. Both solutions are expressed in terms of the magnetic vector potential. The first is an integral equation method, using zero order elements in the discretisation of the unknown source regions. The other is a differential equation method, using a first order finite element mesh, and the Galerkin weighted residual procedure. The resulting equations are solved as initial-value problems. Results from programs based on each method are presented showing the power and limitations of the methods and the range of problems solvable. The methods are compared and recommendations are made for choosing between them. Suggestions are made for improving both methods, involving boundary integral techniques. (author)

  9. Three-body problem in d-dimensional space: Ground state, (quasi)-exact-solvability

    Science.gov (United States)

    Turbiner, Alexander V.; Miller, Willard; Escobar-Ruiz, M. A.

    2018-02-01

    As a straightforward generalization and extension of our previous paper [A. V. Turbiner et al., "Three-body problem in 3D space: Ground state, (quasi)-exact-solvability," J. Phys. A: Math. Theor. 50, 215201 (2017)], we study the aspects of the quantum and classical dynamics of a 3-body system with equal masses, each body with d degrees of freedom, with interaction depending only on mutual (relative) distances. The study is restricted to solutions in the space of relative motion which are functions of mutual (relative) distances only. It is shown that the ground state (and some other states) in the quantum case and the planar trajectories (which are in the interaction plane) in the classical case are of this type. The quantum (and classical) Hamiltonian for which these states are eigenfunctions is derived. It corresponds to a three-dimensional quantum particle moving in a curved space with special d-dimension-independent metric in a certain d-dependent singular potential, while at d = 1, it elegantly degenerates to a two-dimensional particle moving in flat space. It admits a description in terms of pure geometrical characteristics of the interaction triangle which is defined by the three relative distances. The kinetic energy of the system is d-independent; it has a hidden sl(4, R) Lie (Poisson) algebra structure, alternatively, the hidden algebra h(3) typical for the H3 Calogero model as in the d = 3 case. We find an exactly solvable three-body S3-permutationally invariant, generalized harmonic oscillator-type potential as well as a quasi-exactly solvable three-body sextic polynomial type potential with singular terms. For both models, an extra first order integral exists. For d = 1, the whole family of 3-body (two-dimensional) Calogero-Moser-Sutherland systems as well as the Tremblay-Turbiner-Winternitz model is reproduced. It is shown that a straightforward generalization of the 3-body (rational) Calogero model to d > 1 leads to two primitive quasi

  10. Sufficient condition for existence of solutions for higher-order resonance boundary value problem with one-dimensional p-Laplacian

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2007-10-01

    Full Text Available By using the coincidence degree theory of Mawhin, existence results for some higher-order resonance multipoint boundary value problems with a one-dimensional p-Laplacian operator are obtained.

  11. Highly indefinite multigrid for eigenvalue problems

    Energy Technology Data Exchange (ETDEWEB)

    Borges, L.; Oliveira, S.

    1996-12-31

    Eigenvalue problems are extremely important in understanding dynamic processes such as vibrations and control systems. Large scale eigenvalue problems can be very difficult to solve, especially if a large number of eigenvalues and the corresponding eigenvectors need to be computed. For solving this problem a multigrid preconditioned algorithm is presented in "The Davidson Algorithm, preconditioning and misconvergence". Another approach for solving eigenvalue problems is by developing efficient solutions for highly indefinite problems. In this paper we concentrate on the use of new highly indefinite multigrid algorithms for the eigenvalue problem.

  12. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  13. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges in which chaos occurs are obtained. The existence of chaos is confirmed by calculation and analysis of the Lyapunov exponents of all state variables and of the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances change to a certain degree.
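
    Largest-Lyapunov-exponent calculations of the kind used to confirm chaos can be sketched with the classical Benettin two-trajectory method. Since the record does not reproduce the 11-dimensional model's equations, the 3-dimensional Lorenz-63 system is used below as a stand-in; a positive largest exponent indicates chaos.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def largest_lyapunov(f, s0, dt=0.5, n_steps=400, d0=1e-8):
        """Benettin method: evolve a reference and a perturbed trajectory,
        renormalize the separation every dt, average the log stretching."""
        s = np.asarray(s0, dtype=float)
        p = s + d0 * np.ones_like(s) / np.sqrt(len(s))
        lyap_sum = 0.0
        for _ in range(n_steps):
            s = solve_ivp(f, (0, dt), s, rtol=1e-9, atol=1e-11).y[:, -1]
            p = solve_ivp(f, (0, dt), p, rtol=1e-9, atol=1e-11).y[:, -1]
            d = np.linalg.norm(p - s)
            lyap_sum += np.log(d / d0)
            p = s + (p - s) * (d0 / d)   # renormalize the perturbation
        return lyap_sum / (n_steps * dt)

    print(largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))  # approx 0.9 for Lorenz-63
    ```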

  14. Inverse radiative transfer problems in two-dimensional heterogeneous media; Problemas inversos em transferencia radiativa em meios heterogeneos bidimensionais

    Energy Technology Data Exchange (ETDEWEB)

    Tito, Mariella Janette Berrocal

    2001-01-01

    The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two dimensional cartesian geometry. The Levenberg - Marquardt method has been used for the solution of the inverse problem of internal source and absorption and scattering coefficient estimation. (author)

  15. Three-dimensional dynamic rupture simulation with a high-order discontinuous Galerkin method on unstructured tetrahedral meshes

    KAUST Repository

    Pelties, Christian

    2012-02-18

    Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic data into emerging approaches for dynamic source inversion, and to generate realistic physics-based earthquake scenarios for hazard assessment. Modeling of spontaneous earthquake rupture and seismic wave propagation by a high-order discontinuous Galerkin (DG) method combined with an arbitrarily high-order derivatives (ADER) time integration method was introduced in two dimensions by de la Puente et al. (2009). The ADER-DG method enables high accuracy in space and time and discretization by unstructured meshes. Here we extend this method to three-dimensional dynamic rupture problems. The high geometrical flexibility provided by the usage of tetrahedral elements and the lack of spurious mesh reflections in the ADER-DG method allows the refinement of the mesh close to the fault to model the rupture dynamics adequately while concentrating computational resources only where needed. Moreover, ADER-DG does not generate spurious high-frequency perturbations on the fault and hence does not require artificial Kelvin-Voigt damping. We verify our three-dimensional implementation by comparing results of the SCEC TPV3 test problem with two well-established numerical methods, finite differences, and spectral boundary integral. Furthermore, a convergence study is presented to demonstrate the systematic consistency of the method. To illustrate the capabilities of the high-order accurate ADER-DG scheme on unstructured meshes, we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes curved faults, fault branches, and surface topography. Copyright 2012 by the American Geophysical Union.

  16. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  17. Symmetry analysis and exact solutions of one class of (1+3)-dimensional boundary-value problems of the Stefan type

    OpenAIRE

    Kovalenko, S. S.

    2014-01-01

    We present the group classification of one class of (1+3)-dimensional nonlinear boundary-value problems of the Stefan type that simulate the processes of melting and evaporation of metals. The results obtained are used for the construction of the exact solution of one boundary-value problem from the class under study.

  18. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    Science.gov (United States)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    The Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) combines the Heterogeneous Fleet VRP with a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which customer demands are formed by sets of two-dimensional rectangular weighted items. These demands must be served from the depot by a heterogeneous fleet of vehicles with fixed and variable costs. The objective of the 2L-HFVRP is to minimize the total transportation cost. All routes must be consistent with the capacity and the loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic combining the Genetic Algorithm (GA) and the Cross Entropy (CE) method, named Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept of GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange); a sketch of the 2-opt move is given below. The probability transition matrix mechanism of CE is used to avoid getting stuck in local optima. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results are competitive with those of other algorithms.
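
    As the paper itself is not open here, the following is only a minimal sketch of the 2-opt local improvement used as a mutation operator: reverse the route segment between two positions whenever doing so shortens the tour. The distance matrix and the depot-to-depot route encoding are illustrative assumptions.

        import numpy as np

        def two_opt(route, dist):
            # Repeatedly reverse segments while any reversal shortens the route.
            improved = True
            while improved:
                improved = False
                for i in range(1, len(route) - 2):
                    for j in range(i + 1, len(route) - 1):
                        # Cost of edges removed vs. edges created by the reversal.
                        old = dist[route[i-1], route[i]] + dist[route[j], route[j+1]]
                        new = dist[route[i-1], route[j]] + dist[route[i], route[j+1]]
                        if new < old - 1e-12:
                            route[i:j+1] = route[i:j+1][::-1]
                            improved = True
            return route

        rng = np.random.default_rng(0)
        pts = rng.random((8, 2))                       # illustrative customer sites
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        print(two_opt(list(range(8)) + [0], dist))     # depot-to-depot route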

  19. Multisymplectic Structure-Preserving in Simple Finite Element Method in High Dimensional Case

    Institute of Scientific and Technical Information of China (English)

    BAI Yong-Qiang; LIU Zhen; PEI Ming; ZHENG Zhu-Jun

    2003-01-01

    In this paper, we study a finite element scheme for some semi-linear elliptic boundary value problems in high-dimensional space. With a uniform mesh, we find that the numerical scheme derived from the finite element method can keep a preserved multisymplectic structure.

  20. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    Science.gov (United States)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first-derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to converge systematically to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in
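
    The thesis' basket-spread generalization is not reproduced in this record, but the Kirk (1995) two-asset spread-call formula it starts from is standard. A minimal sketch, with illustrative forwards, volatilities, correlation, rate and maturity:

        import numpy as np
        from scipy.stats import norm

        def kirk_spread_call(F1, F2, K, vol1, vol2, rho, r, T):
            # Kirk (1995): treat S2 + K as an effective lognormal asset.
            w = F2 / (F2 + K)
            sig = np.sqrt(vol1**2 - 2.0*rho*vol1*vol2*w + (vol2*w)**2)
            d1 = (np.log(F1/(F2 + K)) + 0.5*sig**2*T) / (sig*np.sqrt(T))
            d2 = d1 - sig*np.sqrt(T)
            return np.exp(-r*T) * (F1*norm.cdf(d1) - (F2 + K)*norm.cdf(d2))

        print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0, vol1=0.30,
                               vol2=0.25, rho=0.6, r=0.02, T=1.0))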

  1. Use of exact albedo conditions in numerical methods for one-dimensional one-speed discrete ordinates eigenvalue problems

    International Nuclear Information System (INIS)

    Abreu, M.P. de

    1994-01-01

    The use of exact albedo boundary conditions in numerical methods applied to one-dimensional one-speed discrete ordinates (S_N) eigenvalue problems for nuclear reactor global calculations is described. An albedo operator that treats the reflector region around a nuclear reactor core implicitly and exactly is derived. To illustrate the method's efficiency and accuracy, the conventional linear diamond method with the albedo option was used to solve typical model problems. (author)

  2. Preparation of wholemount mouse intestine for high-resolution three-dimensional imaging using two-photon microscopy.

    Science.gov (United States)

    Appleton, P L; Quyn, A J; Swift, S; Näthke, I

    2009-05-01

    Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 μm within this tissue. The quality and usefulness of three-dimensional image data from such depths is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, even the highest-quality oil-immersion lenses are designed to work only at short working distances, which makes it difficult to image at high resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality such that the limiting factor for a standard, inverted multi-photon microscope is the working distance of the objective rather than detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.

  3. Three-Dimensional Triplet Tracking for LHC and Future High Rate Experiments

    CERN Document Server

    Schöning, Andre

    2014-10-20

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points ...

  4. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  5. Impact of high-frequency pumping on anomalous finite-size effects in three-dimensional topological insulators

    Science.gov (United States)

    Pervishko, Anastasiia A.; Yudin, Dmitry; Shelykh, Ivan A.

    2018-02-01

    Lowering the thickness of a thin-film three-dimensional topological insulator down to a few nanometers results in the opening of a gap in the spectrum of topologically protected two-dimensional surface states. This phenomenon, which is referred to as the anomalous finite-size effect, originates from hybridization between the states propagating along the opposite boundaries. In this work, we consider a bismuth-based topological insulator and show how the coupling to an intense high-frequency linearly polarized pump can further be used to manipulate the value of the gap. We address this effect within the recently proposed Brillouin-Wigner perturbation theory that allows us to map a time-dependent problem onto a stationary one. Our analysis reveals that both the gap and the components of the group velocity of the surface states can be tuned in a controllable fashion by adjusting the intensity of the driving field within an experimentally accessible range, and we demonstrate the effect of light-induced band inversion in the spectrum of the surface states for sufficiently strong pumping.

  6. One-dimensional singular problems involving the p-Laplacian and nonlinearities indefinite in sign

    OpenAIRE

    Kaufmann, Uriel; Medri, Iván

    2015-01-01

    Let $\Omega$ be a bounded open interval, let $p>1$ and $\gamma>0$, and let $m:\Omega\rightarrow\mathbb{R}$ be a function that may change sign in $\Omega$. In this article we study the existence and nonexistence of positive solutions for one-dimensional singular problems of the form $-(\vert u^{\prime}\vert^{p-2}u^{\prime})^{\prime}=m(x)u^{-\gamma}$ in $\Omega$, $u=0$ on $\partial\Omega$. As a consequence we also derive existence results for other related nonlinearities.

  7. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than neural network approximation in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results with the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
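
    A minimal sketch of the strategy described above, assuming scikit-learn is available: data generated on a low-dimensional latent manifold is embedded in a high-dimensional space, a projection (here PCA) is fitted on a sparse subsample, and the network is trained on the projected data. All dimensions and the target function are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        Z = rng.standard_normal((5000, 10))        # low-dimensional latent data
        X = Z @ rng.standard_normal((10, 100))     # linearly embedded in 100 dims
        y = np.sin(Z[:, 0]) + 0.1*Z[:, 1]          # function living on the manifold

        sample = X[rng.choice(len(X), size=500, replace=False)]  # sparse subsample
        proj = PCA(n_components=10).fit(sample)    # projection fitted on the sample

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        net.fit(proj.transform(X), y)              # train on the projected data
        print(net.score(proj.transform(X), y))     # fit quality in projection space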

  8. Surface harmonics method for two-dimensional time-dependent neutron transport problems of square-lattice nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Boyarinov, V. F.; Kondrushin, A. E.; Fomichenko, P. A. [National Research Centre Kurchatov Institute, Kurchatov Sq. 1, Moscow (Russian Federation)

    2013-07-01

    Time-dependent equations of the Surface Harmonics Method (SHM) have been derived from the time-dependent neutron transport equation with explicit representation of delayed neutrons for solving the two-dimensional time-dependent problems. These equations have been realized in the SUHAM-TD code. The TWIGL benchmark problem has been used for verification of the SUHAM-TD code. The results of the study showed that computational costs required to achieve necessary accuracy of the solution can be an order of magnitude less than with the use of the conventional finite difference method (FDM). (authors)

  9. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  10. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  11. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
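
    As a minimal sketch of the kind of comparison described in the three records above (not the authors' eight estimators), the snippet below contrasts the log-determinant of the plain sample covariance with that of a Ledoit-Wolf shrinkage estimator, on data whose true covariance is the identity so the true log-determinant is zero; the sample size and dimension are illustrative.

        import numpy as np
        from sklearn.covariance import LedoitWolf

        rng = np.random.default_rng(1)
        n, p = 120, 80                       # moderate sample size, high dimension
        X = rng.standard_normal((n, p))      # true covariance = identity, logdet = 0

        sign, logdet_sample = np.linalg.slogdet(np.cov(X, rowvar=False))
        lw = LedoitWolf().fit(X)             # shrinkage covariance estimator
        sign_lw, logdet_lw = np.linalg.slogdet(lw.covariance_)

        # The plug-in sample estimate is severely biased when p is close to n;
        # the shrinkage estimate stays much closer to the true value of 0.
        print(logdet_sample, logdet_lw)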

  12. Heuristic geometric ''eigenvalue universality'' in a one-dimensional neutron transport problem with anisotropic scattering

    International Nuclear Information System (INIS)

    Goncalves, G.A.; Vilhena, M.T. de; Bodmann, B.E.J.

    2010-01-01

    In the present work we propose a heuristic construction of a transport equation for neutrons with anisotropic scattering considering only the radial cylinder dimension. The eigenvalues of the solutions of the equation correspond to the positive values for the one dimensional case. The central idea of the procedure is the application of the S N method for the discretisation of the angular variable followed by the application of the zero order Hankel transformation. The basis the construction of the scattering terms in form of an integro-differential equation for stationary transport resides in the hypothesis that the eigenvalues that compose the elementary solutions are independent of geometry for a homogeneous medium. We compare the solutions for the cartesian one dimensional problem for an infinite cylinder with azimuthal symmetry and linear anisotropic scattering for two cases. (orig.)

  13. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    High dimensional model representation (HDMR) is a tool for capturing input-output relationships in high-dimensional systems for many problems in science and engineering. The HDMR method is developed to improve the efficiency of deducing high-dimensional input-output behaviors. The method is formed by a particular organization of low-dimensional component functions, in which each component function is the contribution of one or more input variables to the output variables.
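
    A minimal sketch of the idea, assuming nothing from the paper's earthquake data: the zeroth-order HDMR component is the output mean, and first-order components f_i(x_i) = E[y | x_i] - f_0 can be estimated by conditional averaging (binning); inputs that do not influence the output yield components near zero, which is what makes HDMR usable for feature selection.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((20000, 3))                   # three inputs on [0, 1]
        y = np.sin(2*np.pi*X[:, 0]) + X[:, 1]**2     # input 2 is inactive

        f0 = y.mean()                                # zeroth-order HDMR component
        bins = np.linspace(0.0, 1.0, 21)
        for i in range(X.shape[1]):
            idx = np.digitize(X[:, i], bins) - 1
            # First-order component: conditional mean of y in each bin, minus f0.
            f_i = np.array([y[idx == b].mean() for b in range(20)]) - f0
            print(f"max |f_{i}|:", np.abs(f_i).max())  # near zero for input 2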

  14. Some problems of dynamical systems on three dimensional manifolds

    International Nuclear Information System (INIS)

    Dong Zhenxie.

    1985-08-01

    It is important to study dynamical systems on 3-dimensional manifolds; their importance shows up in their close relation to applications. Because of the complicated topological structure of dynamical systems on 3-dimensional manifolds, the study of 3-dimensional dynamical systems is, generally speaking, not easier than that of 2-dimensional ones. This paper is a summary of partial results on dynamical systems on 3-dimensional manifolds. (author)

  15. Efficient evaluation of influence coefficients in three-dimensional extended boundary-node method for potential problems

    International Nuclear Information System (INIS)

    Itoh, Taku; Saitoh, Ayumu; Kamitani, Atsushi; Nakamura, Hiroaki

    2011-01-01

    For the purpose of speed-up of the three-dimensional eXtended Boundary-Node Method (X-BNM), an efficient algorithm for evaluating influence coefficients has been developed. The algorithm can be easily implemented into the X-BNM without using any integration cells. By applying the resulting X-BNM to the Laplace problem, the performance of the algorithm is numerically investigated. The numerical experiments show that, by using the algorithm, computational costs for evaluating influence coefficients in the X-BNM are reduced considerably. Especially for a large-sized problem, the algorithm is efficiently performed, and the computational costs of the X-BNM are close to those of the Boundary-Element Method (BEM). In addition, for the problem, the X-BNM shows almost the same accuracy as that of the BEM. (author)

  16. Classical Lie Point Symmetry Analysis of a Steady Nonlinear One-Dimensional Fin Problem

    Directory of Open Access Journals (Sweden)

    R. J. Moitsheki

    2012-01-01

    We consider the one-dimensional steady fin problem with the Dirichlet boundary condition at one end and the Neumann boundary condition at the other. Both the thermal conductivity and the heat transfer coefficient are given as arbitrary functions of temperature. We perform preliminary group classification to determine forms of the arbitrary functions appearing in the considered equation for which the principal Lie algebra is extended. Some invariant solutions are constructed. The effects of thermogeometric fin parameter and the exponent on temperature are studied. Also, the fin efficiency is analyzed.

  17. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise. Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for handling ...

  18. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
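
    A minimal numerical sketch of the gate algebra involved (not the optical implementation): in dimension d = 4 the generalized X gate is the cyclic shift X|j> = |j+1 mod d>, the companion Z gate is the diagonal phase gate Z|j> = w^j |j> with w = exp(2*pi*i/d), and together they satisfy the Weyl commutation relation.

        import numpy as np

        d = 4
        X = np.roll(np.eye(d), 1, axis=0)             # cyclic shift: X|j> = |j+1 mod d>
        Z = np.diag(np.exp(2j*np.pi*np.arange(d)/d))  # generalized Pauli Z

        # X^d returns to the identity, so all integer powers X, X^2, X^3 are distinct.
        print(np.allclose(np.linalg.matrix_power(X, d), np.eye(d)))
        # Weyl relation: X Z = w^{-1} Z X with w = exp(2*pi*i/d).
        print(np.allclose(X @ Z, np.exp(-2j*np.pi/d) * (Z @ X)))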

  19. On Riemann boundary value problems for null solutions of the two dimensional Helmholtz equation

    Science.gov (United States)

    Bory Reyes, Juan; Abreu Blaya, Ricardo; Rodríguez Dagnino, Ramón Martin; Kats, Boris Aleksandrovich

    2018-01-01

    The Riemann boundary value problem (RBVP, to shorten notation) in the complex plane, for different classes of functions and curves, is still widely used in mathematical physics and engineering, for instance in elasticity theory, hydro- and aerodynamics, shell theory, quantum mechanics, and the theory of orthogonal polynomials. In this paper, we present an appropriate hyperholomorphic approach to the RBVP associated with the two-dimensional Helmholtz equation in R^2. Our analysis is based on a suitable operator calculus.

  20. Solution of the one-dimensional time-dependent discrete ordinates problem in a slab by the spectral and LTSN methods

    International Nuclear Information System (INIS)

    Oliveira, J.V.P. de; Cardona, A.V.; Vilhena, M.T.M.B. de

    2002-01-01

    In this work, we present a new approach to solve the one-dimensional time-dependent discrete ordinates problem (S_N problem) in a slab. The main idea is based upon the application of the spectral method to the set of S_N time-dependent differential equations and the solution of the resulting coupled equations by the LTS_N method. We report numerical simulations.

  1. On the solution of the inverse scattering problem for the quadratic bundle of the one-dimensional Schroedinger operators of the whole axis

    International Nuclear Information System (INIS)

    Maksudov, F.G.; Gusejnov, G.Sh.

    1986-01-01

    The inverse scattering problem for the quadratic bundle of one-dimensional Schroedinger operators on the whole axis is solved. The solution is given under the assumption that the discrete spectrum is absent. When a discrete spectrum is present, the solution of the inverse scattering problem is known for the Schroedinger differential equation considered.

  2. Problems of high energy physics

    International Nuclear Information System (INIS)

    Kadyshevskij, V.G.

    1989-01-01

    Some problems of high energy physics are discussed. The main attention is paid to describing the standard model. The model comprises quantum chromodynamics and the electroweak interaction theory. The problem of CP violation is considered as well. 8 refs.; 1 tab

  3. Spectral dimensionality of random superconducting networks

    International Nuclear Information System (INIS)

    Day, A.R.; Xia, W.; Thorpe, M.F.

    1988-01-01

    We compute the spectral dimensionality d̃ of random superconducting-normal networks by directly examining the low-frequency density of states at the percolation threshold. We find that d̃ = 4.1 ± 0.2 and 5.8 ± 0.3 in two and three dimensions, respectively, which confirms the scaling relation d̃ = 2d/(2 - s/ν), where d is the Euclidean dimension, s is the superconducting exponent, and ν the correlation-length exponent for percolation. We also consider the one-dimensional problem, where scaling arguments predict, and our numerical simulations confirm, that d̃ = 0. A simple argument provides an expression for the density of states of the localized high-frequency modes in this special case. We comment on the connection between our calculations and the "termite" problem of a random walker on a random superconducting-normal network and point out difficulties in inferring d̃ from simulations of the termite problem.

  4. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  5. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  6. A phase change processor method for solving a one-dimensional phase change problem with convection boundary

    Energy Technology Data Exchange (ETDEWEB)

    Halawa, E.; Saman, W.; Bruno, F. [Institute for Sustainable Systems and Technologies, School of Advanced Manufacturing and Mechanical Engineering, University of South Australia, Mawson Lakes SA 5095 (Australia)

    2010-08-15

    A simple yet accurate iterative method for solving a one-dimensional phase change problem with convection boundary is described. The one-dimensional model takes into account the variation in the wall temperature along the direction of the flow as well as the sensible heat during preheating/pre-cooling of the phase change material (PCM). The mathematical derivation of convective boundary conditions has been integrated into a phase change processor (PCP) algorithm that solves the liquid fraction and temperature of the nodes. The algorithm is based on the heat balance at each node as it undergoes heating or cooling which inevitably involves phase change. The paper presents the model and its experimental validation. (author)
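
    The PCP algorithm itself is not reproduced in this record; the sketch below is a generic explicit enthalpy-method update for one-dimensional melting with a convective boundary, in the same spirit (node-by-node heat balances that carry the liquid fraction). Material properties, grid parameters and coefficients are illustrative assumptions.

        import numpy as np

        # Illustrative properties: conductivity, density, specific heat, latent heat.
        k, rho, c, L = 0.2, 800.0, 2000.0, 2.0e5
        Tm, h, Tinf = 30.0, 50.0, 60.0     # melt temperature, film coeff., fluid temp.
        nx, dx, dt = 50, 1e-3, 0.05        # grid and stability-checked time step

        T = np.full(nx, 20.0)              # start 10 K below the melting point
        H = rho*c*(T - Tm)                 # enthalpy per unit volume (H = 0 at onset)

        def temp_and_fraction(H):
            # Invert the enthalpy-temperature relation node by node.
            T = np.where(H < 0.0, Tm + H/(rho*c),
                np.where(H > rho*L, Tm + (H - rho*L)/(rho*c), Tm))
            f = np.clip(H/(rho*L), 0.0, 1.0)       # liquid fraction per node
            return T, f

        for _ in range(20000):
            T, f = temp_and_fraction(H)
            dHdt = np.zeros(nx)
            dHdt[1:-1] = k*(T[2:] - 2.0*T[1:-1] + T[:-2])/dx**2
            dHdt[0] = (h*(Tinf - T[0]) + k*(T[1] - T[0])/dx)/dx  # convective face
            dHdt[-1] = k*(T[-2] - T[-1])/dx**2                   # insulated far end
            H += dt*dHdt

        print(temp_and_fraction(H)[1])     # liquid fraction profile after 1000 s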

  7. A comparison of high-order polynomial and wave-based methods for Helmholtz problems

    Science.gov (United States)

    Lieu, Alice; Gabard, Gwénaël; Bériot, Hadrien

    2016-09-01

    The application of computational modelling to wave propagation problems is hindered by the dispersion error introduced by the discretisation. Two common strategies to address this issue are to use high-order polynomial shape functions (e.g. hp-FEM), or to use physics-based, or Trefftz, methods where the shape functions are local solutions of the problem (typically plane waves). Both strategies have been actively developed over the past decades and both have demonstrated their benefits compared to conventional finite-element methods, but they have yet to be compared. In this paper a high-order polynomial method (p-FEM with Lobatto polynomials) and the wave-based discontinuous Galerkin method are compared for two-dimensional Helmholtz problems. A number of different benchmark problems are used to perform a detailed and systematic assessment of the relative merits of these two methods in terms of interpolation properties, performance and conditioning. It is generally assumed that a wave-based method naturally provides better accuracy compared to polynomial methods since the plane waves or Bessel functions used in these methods are exact solutions of the Helmholtz equation. Results indicate that this expectation does not necessarily translate into a clear benefit, and that the differences in performance, accuracy and conditioning are more nuanced than generally assumed. The high-order polynomial method can in fact deliver comparable, and in some cases superior, performance compared to the wave-based DGM. In addition to benchmarking the intrinsic computational performance of these methods, a number of practical issues associated with realistic applications are also discussed.

  8. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  9. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
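
    A minimal sketch of the two-step idea behind such hybrid estimators (LASSO to select the sparse support of the VAR coefficients, then least squares refitted on that support to reduce bias), assuming scikit-learn; the channel count, lag order, sparsity pattern and penalty level are illustrative assumptions, not the authors' settings.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        p, T_len = 8, 500                                 # channels, time points
        A = np.zeros((p, p))
        A[np.diag_indices(p)] = 0.5                       # sparse true VAR(1) matrix
        A[0, 1] = 0.3
        X = np.zeros((T_len, p))
        for t in range(1, T_len):
            X[t] = X[t-1] @ A.T + rng.standard_normal(p)  # simulate the VAR(1)

        Y, Z = X[1:], X[:-1]                              # regression pairs
        A_hat = np.zeros((p, p))
        for i in range(p):                                # one channel at a time
            sel = Lasso(alpha=0.05).fit(Z, Y[:, i]).coef_ != 0   # step 1: support
            if sel.any():                                 # step 2: OLS refit on support
                A_hat[i, sel] = np.linalg.lstsq(Z[:, sel], Y[:, i], rcond=None)[0]
        print(np.round(A_hat, 2))                         # compare with true A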

  10. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  11. Does Anxiety Modify the Risk for, or Severity of, Conduct Problems Among Children With Co-Occurring ADHD: Categorical and Dimensional Analyses.

    Science.gov (United States)

    Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F

    2017-08-01

    The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.

  12. Overcoming the sign problem in 1-dimensional QCD by new integration rules with polynomial exactness

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, A. [IVU-Traffic Technologies AG, Berlin (Germany); Hartung, T. [King' s College London (United Kingdom). Dept. of Mathematics; Jansen, K.; Volmer, J. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, H. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik

    2016-08-15

    In this paper we describe a new integration method for the groups U(N) and SU(N), for which we have verified numerically that it is polynomially exact for N≤3. The method is applied to the example of 1-dimensional QCD with a chemical potential. We explore, in particular, regions of the parameter space in which the sign problem appears due to the presence of the chemical potential. While Markov chain Monte Carlo fails in this region, our new integration method still provides results for the chiral condensate to arbitrary precision, demonstrating clearly that it overcomes the sign problem. Furthermore, we demonstrate that our new method leads to errors reduced by orders of magnitude also in other regions of parameter space.

  13. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  14. Two-dimensional lift-up problem for a rigid porous bed

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.; Huang, L. H.; Yang, F. P. Y. [Department of Civil Engineering, National Taiwan University, Taipei, Taiwan (China)

    2015-05-15

    The present study analytically reinvestigates the two-dimensional lift-up problem for a rigid porous bed that was studied by Mei, Yeung, and Liu [“Lifting of a large object from a porous seabed,” J. Fluid Mech. 152, 203 (1985)]. Mei, Yeung, and Liu proposed a model that treats the bed as a rigid porous medium and performed relevant experiments. In their model, they assumed the gap flow comes from the periphery of the gap, and there is a shear layer in the porous medium; the flow in the gap is described by adhesion approximation [D. J. Acheson, Elementary Fluid Dynamics (Clarendon, Oxford, 1990), pp. 243-245.] and the pore flow by Darcy’s law, and the slip-flow condition proposed by Beavers and Joseph [“Boundary conditions at a naturally permeable wall,” J. Fluid Mech. 30, 197 (1967)] is applied to the bed interface. In this problem, however, the gap flow initially mainly comes from the porous bed, and the shear layer may not exist. Although later the shear effect becomes important, the empirical slip-flow condition might not physically respond to the shear effect, and the existence of the vertical velocity affects the situation so greatly that the slip-flow condition might not be appropriate. In contrast, the present study proposes a more general model for the problem, applying Stokes flow to the gap, the Brinkman equation to the porous medium, and Song and Huang’s [“Laminar poroelastic media flow,” J. Eng. Mech. 126, 358 (2000)] complete interfacial conditions to the bed interface. The exact solution to the problem is found and fits Mei’s experiments well. The breakout phenomenon is examined for different soil beds, mechanics that cannot be illustrated by Mei’s model are revealed, and the theoretical breakout times obtained using Mei’s model and our model are compared. The results show that the proposed model is more compatible with physics and provides results that are more precise.

  15. Three-dimensional formulation of the relativistic two-body problem in terms of rapidities

    International Nuclear Information System (INIS)

    Amirkhanov, I.V.; Grusha, G.V.; Mir-Kasimov, R.M.

    1976-01-01

    A scheme based on the three-dimensional relativistic equation of the quasi-potential type is developed. As the basic variable, the rapidity, canonically conjugate to the relativistic relative distance, is adopted. The free Green function has a simple pole in the complex rapidity plane, ensuring the fulfillment of elastic unitarity for real potentials. In the local-potential case the corresponding partial-wave equation in the configurational r-representation is a second-order differential equation. The problem of boundary conditions, which is non-trivial in the relativistic r-space, is studied. Exact solutions of the equation have been found in simple cases.

  16. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two-dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrape-off zone shielding for sputtered impurities.

  17. The finite element solution of two-dimensional transverse magnetic scattering problems on the connection machine

    International Nuclear Information System (INIS)

    Hutchinson, S.; Costillo, S.; Dalton, K.; Hensel, E.

    1990-01-01

    A study is conducted of the finite element solution of the partial differential equations governing two-dimensional electromagnetic field scattering problems on a SIMD computer. A nodal assembly technique is introduced which maps a single node to a single processor. The physical domain is first discretized in parallel to yield the node locations of an O-grid mesh. Next, the system of equations is assembled and then solved in parallel using a conjugate gradient algorithm for complex-valued, non-symmetric, non-positive definite systems. Using this technique and Thinking Machines Corporation's Connection Machine-2 (CM-2), problems with more than 250k nodes are solved. Results of electromagnetic scattering, governed by the 2-d scalar Helmholtz wave equation, are presented in this paper. Solutions are demonstrated for a wide range of objects. A summary of performance data is given for the set of test problems.

  18. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found for two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R^4 onto R^3, which establishes an equivalence between Coulomb and harmonic potentials, is used to show that the exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author)

  19. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  20. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  1. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  2. Exact Solution of the Two-Dimensional Problem on an Impact Ideal-Liquid Jet

    Science.gov (United States)

    Belik, V. D.

    2018-05-01

    The two-dimensional problem of the collision of a potential ideal-liquid jet, outflowing from a reservoir through a nozzle, with an infinite plane obstacle was considered for the case where the distance between the nozzle exit section and the obstacle is finite. An exact solution of this problem has been found using methods of complex-variable function theory. Simple analytical expressions for the complex velocity of the liquid, its flow rate, and the force of action of the jet on the obstacle have been obtained. The velocity distributions of the liquid at the nozzle exit section, in the region of spreading of the jet, and at the obstacle have been constructed for different distances between the nozzle exit section and the obstacle. Analytical expressions for the thickness of the boundary layer and the Nusselt number at the stagnation point of the jet have been obtained. A number of distributions of the local friction coefficient and the Nusselt number of the indicated jet are presented.

  3. An analytical discrete ordinates solution for a nodal model of a two-dimensional neutron transport problem

    International Nuclear Information System (INIS)

    Filho, J. F. P.; Barichello, L. B.

    2013-01-01

    In this work, an analytical discrete ordinates method is used to solve a nodal formulation of a neutron transport problem in x, y-geometry. The proposed approach leads to an important reduction in the order of the associated eigenvalue systems, when combined with the classical level symmetric quadrature scheme. Auxiliary equations are proposed, as usually required for nodal methods, to express the unknown fluxes at the boundary introduced as additional unknowns in the integrated equations. Numerical results, for the problem defined by a two-dimensional region with a spatially constant and isotropically emitting source, are presented and compared with those available in the literature. (authors)

  4. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  5. Three-dimensional triplet tracking for LHC and future high rate experiments

    International Nuclear Information System (INIS)

    Schöning, A

    2014-01-01

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. Full tracks can be reconstructed step-wise by connecting hit triplet combinations from different layers, thus heavily reducing the combinatorial problem and accelerating track linking. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points. With the advent of relatively cheap and industrially available CMOS-sensors the construction of highly granular full scale pixel tracking detectors seems to be possible also for experiments at LHC or future high energy (hadron) colliders. In this paper tracking performance studies for full-scale pixel detectors, including their optimisation for 3D-triplet tracking, are presented. The results obtained for different types of tracker geometries and different reconstruction methods are compared. The potential of reducing the number of tracking layers and - along with that - the material budget using this new tracking concept is discussed. The possibility of using 3D-triplet tracking for triggering and fast online reconstruction is highlighted
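
    A minimal sketch of the geometry that makes triplet fitting cheap in a solenoid: three pixel space-points define a circle in the bending plane, the circumradius follows from the triangle they form, and the transverse momentum then follows from pT ≈ 0.3 B R (pT in GeV/c, B in tesla, R in metres). The hit coordinates below are illustrative assumptions, not detector data.

        import numpy as np

        def triplet_curvature(p1, p2, p3):
            # Circumradius of the triangle formed by three transverse hit positions:
            # R = a*b*c / (4*area), with a, b, c the side lengths.
            a = np.linalg.norm(p2 - p1)
            b = np.linalg.norm(p3 - p2)
            c = np.linalg.norm(p3 - p1)
            u, v = p2 - p1, p3 - p1
            area = 0.5*abs(u[0]*v[1] - u[1]*v[0])
            return a*b*c/(4.0*area)

        hits = [np.array([0.00, 0.000]), np.array([0.05, 0.002]),
                np.array([0.10, 0.008])]          # transverse (x, y) in metres
        R = triplet_curvature(*hits)
        print(0.3*2.0*R, "GeV/c (pT for B = 2 T)")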

  6. Convergence rates and finite-dimensional approximations for nonlinear ill-posed problems involving monotone operators in Banach spaces

    International Nuclear Information System (INIS)

    Nguyen Buong.

    1992-11-01

    The purpose of this paper is to investigate convergence rates for an operator version of Tikhonov regularization constructed by dual mapping for nonlinear ill-posed problems involving monotone operators in real reflexive Banach spaces. The obtained results are considered in combination with finite-dimensional approximations for the space. An example is considered for illustration. (author). 15 refs

  7. An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

    KAUST Repository

    Asiri, Sharefa M.

    2013-05-25

    Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer to solve an inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time; then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We examine the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
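
    For reference, the Tikhonov baseline that the observer is compared against reduces, after discretization, to a damped least-squares problem. A minimal sketch follows, assuming a generic discrete linear source-to-measurement map A stands in for the discretized wave equation; the observer design itself is not reproduced here.

        import numpy as np

        def tikhonov(A, y, alpha):
            """Minimize ||A u - y||^2 + alpha * ||u||^2 via the normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

        # Hypothetical toy problem: a smoothing forward map and noisy data.
        rng = np.random.default_rng(0)
        A = np.exp(-0.1 * (np.arange(50)[:, None] - np.arange(40)[None, :]) ** 2)
        u_true = np.sin(np.linspace(0.0, np.pi, 40))
        y = A @ u_true + 0.01 * rng.standard_normal(50)
        u_hat = tikhonov(A, y, alpha=1e-3)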

  8. Shilajit: A panacea for high-altitude problems.

    Science.gov (United States)

    Meena, Harsahay; Pandey, H K; Arya, M C; Ahmed, Zakwan

    2010-01-01

    High-altitude problems like hypoxia, acute mountain sickness, high-altitude cerebral edema, pulmonary edema, insomnia, tiredness, lethargy, lack of appetite, body pain, dementia, and depression may occur when a person or a soldier residing at a lower altitude ascends to high-altitude areas. These problems arise due to low atmospheric pressure, severe cold, high intensity of solar radiation, high wind velocity, and very large fluctuations between day and night temperatures in these regions. They may escalate rapidly and may sometimes become life-threatening. Shilajit is a pale-brown to blackish-brown herbomineral drug composed of a gummy exudate that oozes from the rocks of the Himalayas in the summer months. It contains humus, organic plant materials, and fulvic acid as the main carrier molecules. It actively takes part in the transportation of nutrients into deep tissues and helps to overcome tiredness, lethargy, and chronic fatigue. Shilajit improves the ability to handle high-altitude stresses and stimulates the immune system. Thus, Shilajit can be given as a supplement to people ascending to high-altitude areas so that it can act as a "health rejuvenator" and help to overcome high-altitude related problems.

  9. High-intensity ionization approximations: test of convergence in a one-dimensional model

    International Nuclear Information System (INIS)

    Antunes Neto, H.S.; Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro); Davidovich, L.; Marchesin, D.

    1983-06-01

    By numerically solving a one-dimensional model, the range of validity of some non-perturbative treatments proposed for the problem of atomic ionization by strong laser fields is examined. Some scaling properties of the ionization probability are established, and a new approximation, which converges to the exact results in the limit of very strong fields, is proposed. (Author) [pt

  10. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  11. (Weakly) three-dimensional caseology

    International Nuclear Information System (INIS)

    Pomraning, G.C.

    1996-01-01

    The singular eigenfunction technique of Case for solving one-dimensional planar symmetry linear transport problems is extended to a restricted class of three-dimensional problems. This class involves planar geometry, but with forcing terms (either boundary conditions or internal sources) which are weakly dependent upon the transverse spatial variables. Our analysis involves a singular perturbation about the classic planar analysis, and leads to the usual Case discrete and continuum modes, but modulated by weakly dependent three-dimensional spatial functions. These functions satisfy parabolic differential equations, with a different diffusion coefficient for each mode. Representative one-speed time-independent transport problems are solved in terms of these generalised Case eigenfunctions. Our treatment is very heuristic, but may provide an impetus for more rigorous analysis. (author)

  12. Procedures for two-dimensional electrophoresis of proteins

    Energy Technology Data Exchange (ETDEWEB)

    Tollaksen, S.L.; Giometti, C.S.

    1996-10-01

    High-resolution two-dimensional gel electrophoresis (2DE) of proteins, using isoelectric focusing in the first dimension and sodium dodecyl sulfate/polyacrylamide gel electrophoresis (SDS-PAGE) in the second, was first described in 1975. In the 20 years since those publications, numerous modifications of the original method have evolved. The ISO-DALT system of 2DE is a high-throughput approach that has stood the test of time. The problem of casting many isoelectric focusing gels and SDS-PAGE slab gels (up to 20) in a reproducible manner has been solved by the use of the techniques and equipment described in this manual. The ISO-DALT system of two-dimensional gel electrophoresis originated in the late 1970s and has been modified many times to improve its high-resolution, high-throughput capabilities. This report provides the detailed procedures used with the current ISO-DALT system to prepare, run, stain, and photograph two-dimensional gels for protein analysis.

  13. [Application Progress of Three-dimensional Laser Scanning Technology in Medical Surface Mapping].

    Science.gov (United States)

    Zhang, Yonghong; Hou, He; Han, Yuchuan; Wang, Ning; Zhang, Ying; Zhu, Xianfeng; Wang, Mingshi

    2016-04-01

    The booming three-dimensional laser scanning technology can efficiently and effectively acquire the spatial three-dimensional coordinates of a detected object's surface and reconstruct the image at high speed, with high precision and a large capacity of information. Being radiation-free and non-contact, and offering visualization capability, it is increasingly popular in three-dimensional medical surface mapping. This paper reviews the applications and developments of three-dimensional laser scanning technology in the medical field, especially in stomatology, plastic surgery and orthopedics. Furthermore, the paper also discusses future application prospects as well as the biomedical engineering problems the technology will encounter.

  14. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in an OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)

  15. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next-generation optical access networks is of interest, as they offer promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rates up to 1.59 Gbps with fiber-wireless transmission over a 1 m air distance are demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting laser (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  16. Method for coupling two-dimensional to three-dimensional discrete ordinates calculations

    International Nuclear Information System (INIS)

    Thompson, J.L.; Emmett, M.B.; Rhoades, W.A.; Dodds, H.L. Jr.

    1985-01-01

    A three-dimensional (3-D) discrete ordinates transport code, TORT, has been developed at the Oak Ridge National Laboratory for radiation penetration studies. It is not feasible to solve some 3-D penetration problems with TORT, such as a building located a large distance from a point source, because (a) the discretized 3-D problem is simply too big to fit on the computer or (b) the computing time (and corresponding cost) is prohibitive. Fortunately, such problems can be solved with a hybrid approach by coupling a two-dimensional (2-D) description of the point source, which is assumed to be azimuthally symmetric, to a 3-D description of the building, the region of interest. The purpose of this paper is to describe this hybrid methodology along with its implementation and evaluation in the DOTTOR (Discrete Ordinates to Three-dimensional Oak Ridge Transport) code

  17. A parameter identification problem arising from a two-dimensional airfoil section model

    International Nuclear Information System (INIS)

    Cerezo, G.M.

    1994-01-01

    The development of state-space models for aeroelastic systems, including unsteady aerodynamics, is particularly important for the design of highly maneuverable aircraft. In this work we present a state-space formulation for a special class of singular neutral functional differential equations (SNFDE) with initial data in C(-1, 0). This work is motivated by the two-dimensional airfoil model presented by Burns, Cliff and Herdman, who also discuss the validity of the assumptions under which the model was formulated. They pay special attention to the derivation of the evolution equation for the circulation on the airfoil. This equation was coupled to the rigid-body dynamics of the airfoil in order to obtain a complete set of functional differential equations that describes the composite system. The resulting mathematical model for the aeroelastic system has a weakly singular component. In this work we consider a finite-delay approximation to that model. We work with a scalar model in which we retain the weak singularity appearing in the original problem. The main goal of this work is to develop numerical techniques for the identification of the parameters appearing in the kernel of the associated scalar integral equation. Clearly this is the first step in the study of parameter identification for the original model and the corresponding validation of this model for the aeroelastic system.

  18. Minimizing waste (off-cuts) using a cutting stock model: The case of the one-dimensional cutting stock problem in the wood working industry

    Directory of Open Access Journals (Sweden)

    Gbemileke A. Ogunranti

    2016-09-01

    Purpose: The main objective of this study is to develop a model for solving the one-dimensional cutting stock problem in the wood working industry and to develop a program for its implementation. Design/methodology/approach: This study adopts a pattern-oriented approach in the formulation of the cutting stock model. A pattern generation algorithm was developed and coded in Visual Basic .NET. The cutting stock model developed is a linear programming (LP) model constrained by numerous feasible patterns. An LP solver was integrated with the pattern generation program to produce a one-dimensional cutting stock application named GB Cutting Stock Program. Findings and Originality/value: Applying the model to a real-life optimization problem significantly reduces material waste (off-cuts) and minimizes the total stock used. The result yielded about 30.7% cost savings for company I when the total stock material used is compared with the former cutting plan. Also, to evaluate the efficiency of the application, the Case I problem was solved using two top commercial 1D cutting stock packages. The results show that the GB program performs better when the related results are compared. Research limitations/implications: This study rounds up the linear programming solution for the number of each pattern to cut. Practical implications: From a managerial perspective, implementing optimized cutting plans increases productivity by eliminating calculation errors and drastically reducing operator mistakes. Also, financial benefits that can annually amount to millions in cost savings can be achieved through significant material waste reduction. Originality/value: This paper develops a linear programming one-dimensional cutting stock model based on a pattern generation algorithm to minimize waste in the wood working industry. To implement the model, the algorithm was coded using Visual Basic .NET and a linear programming solver called lpsolvedll (dynamic
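
    The pattern-oriented formulation described above can be sketched compactly: enumerate maximal cutting patterns for the stock length, then solve an LP that covers demand with as few stock pieces as possible. The Python sketch below (with scipy; the lengths, demands and stock size are made-up numbers) mirrors that structure, including the final rounding-up of the LP solution mentioned in the abstract; it is not the GB Cutting Stock Program itself.

        import numpy as np
        from scipy.optimize import linprog

        def patterns(lengths, stock, start=0, current=None, out=None):
            """Recursively enumerate maximal cutting patterns for one stock length."""
            if current is None:
                current, out = [0] * len(lengths), []
            used = sum(n * l for n, l in zip(current, lengths))
            extended = False
            for i in range(start, len(lengths)):
                if used + lengths[i] <= stock:
                    current[i] += 1
                    patterns(lengths, stock, i, current, out)
                    current[i] -= 1
                    extended = True
            if not extended:
                out.append(current.copy())
            return out

        lengths, demand, stock = [30, 45, 50], [20, 12, 8], 100   # made-up data
        P = np.array(patterns(lengths, stock)).T     # piece counts per pattern
        # Minimize stock pieces used subject to covering demand (LP relaxation).
        res = linprog(c=np.ones(P.shape[1]), A_ub=-P, b_ub=-np.array(demand))
        cuts = np.ceil(res.x)                        # round up, as in the study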

  19. NUMERICAL METHOD OF MIXED FINITE VOLUME-MODIFIED UPWIND FRACTIONAL STEP DIFFERENCE FOR THREE-DIMENSIONAL SEMICONDUCTOR DEVICE TRANSIENT BEHAVIOR PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yirang YUAN; Qing YANG; Changfeng LI; Tongjun SUN

    2017-01-01

    Transient behavior of a three-dimensional semiconductor device with heat conduction is described by a coupled mathematical system of four quasi-linear partial differential equations with initial-boundary value conditions. The electric potential is defined by an elliptic equation and enters the other three equations via the electric field intensity. The electron concentration and the hole concentration are determined by convection-dominated diffusion equations, and the temperature is governed by a heat conduction equation. A mixed finite volume element approximation, preserving the physical conservation law, is used to obtain numerical values of the electric potential, and the accuracy is improved by one order. The two concentrations and the heat conduction are computed by a fractional step method combined with second-order upwind differences. This method overcomes numerical oscillation and dispersion and decreases computational complexity. The three-dimensional problem is then solved by computing three successive one-dimensional problems, where a speedup technique is used and the computational work is greatly reduced. An optimal second-order error estimate in the L2 norm is derived using a priori estimate theory and other special techniques for partial differential equations. This type of mass-conservative parallel method is important and most valuable in the numerical analysis and application of semiconductor devices.

  20. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  1. An analytical approach for a nodal formulation of a two-dimensional fixed-source neutron transport problem in heterogeneous medium

    Energy Technology Data Exchange (ETDEWEB)

    Basso Barichello, Liliane; Dias da Cunha, Rudnei [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst. de Matematica; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada

    2015-05-15

    A nodal formulation of a fixed-source two-dimensional neutron transport problem, in Cartesian geometry, defined in a heterogeneous medium, is solved by an analytical approach. Explicit expressions, in terms of the spatial variables, are derived for averaged fluxes in each region in which the domain is subdivided. The procedure is an extension of an analytical discrete ordinates method, the ADO method, for the solution of the two-dimensional homogeneous medium case. The scheme is developed from the discrete ordinates version of the two-dimensional transport equation along with the level symmetric quadrature scheme. As usual for nodal schemes, relations between the averaged fluxes and the unknown angular fluxes at the contours are introduced as auxiliary equations. Numerical results are in agreement with results available in the literature.

  2. Estimation of surface temperature by using inverse problem. Part 1. Steady state analyses of two-dimensional cylindrical system

    International Nuclear Information System (INIS)

    Takahashi, Toshio; Terada, Atsuhiko

    2006-03-01

    In the corrosive process environment of a thermochemical hydrogen production iodine-sulfur process plant, direct measurement of the surface temperature of the structural materials is difficult. An inverse problem method can be applied effectively here, enabling estimation of the surface temperature from temperature data taken inside the structural materials. This paper presents analytical results for steady-state temperature distributions in a two-dimensional cylindrical system cooled by an impinging jet flow, and clarifies the order of the multiple-valued function needed to achieve satisfactory precision from an engineering viewpoint. (author)

  3. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high-dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Unlike existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature-screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
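
    A minimal sketch of the core idea, in the one-sample case: a maximum-type statistic whose critical value is simulated rather than derived from covariance assumptions. The resampling below is a Gaussian multiplier bootstrap, a common stand-in for the parametric bootstrap of the paper; it is illustrative Python, not the HDtest implementation.

        import numpy as np

        def max_type_test(X, B=2000, seed=0):
            """Test H0: mean = 0 with a max-type statistic; the critical value
            comes from a multiplier bootstrap, with no structural assumptions
            on the covariance."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            sd = X.std(axis=0, ddof=1)
            T = np.max(np.abs(np.sqrt(n) * X.mean(axis=0) / sd))
            Xc = (X - X.mean(axis=0)) / sd           # centered, standardized
            Tb = np.empty(B)
            for b in range(B):
                e = rng.standard_normal(n)           # Gaussian multipliers
                Tb[b] = np.max(np.abs(e @ Xc) / np.sqrt(n))
            return T, float(np.mean(Tb >= T))        # statistic, bootstrap p-value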

  4. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
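
    The adaLASSO itself is easy to sketch in a static regression setting (the dynamic panel machinery and fixed effects of the paper are omitted): a pilot estimate defines coefficient-specific penalty weights, and the weighted problem is solved as an ordinary lasso on rescaled columns. Python sketch with scikit-learn; the ridge pilot and the tuning values are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
            """Two-step adaptive LASSO: pilot estimates define penalty weights;
            the weighted problem reduces to a plain lasso on rescaled columns."""
            pilot = Ridge(alpha=1.0).fit(X, y).coef_
            w = 1.0 / (np.abs(pilot) ** gamma + 1e-8)   # penalty weights
            Xs = X / w                                  # rescale column j by 1/w_j
            fit = Lasso(alpha=alpha).fit(Xs, y)
            return fit.coef_ / w                        # undo the rescaling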

  5. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  6. An investigation on a two-dimensional problem of Mode-I crack in a thermoelastic medium

    Science.gov (United States)

    Kant, Shashi; Gupta, Manushi; Shivay, Om Namha; Mukhopadhyay, Santwana

    2018-04-01

    In this work, we consider a two-dimensional dynamical problem of an infinite space with a finite linear Mode-I crack and employ a recently proposed heat conduction model: exact heat conduction with a single delay term. The thermoelastic medium is taken to be homogeneous and isotropic. The boundary of the crack is subjected to prescribed temperature and stress distributions. The Fourier and Laplace transform techniques are used to solve the problem. Mathematical modeling reduces the solution of the problem to the solution of a system of four dual integral equations, which is in turn equivalent to solving a Fredholm integral equation of the first kind; the latter is solved using the regularization method. The inverse Laplace transform is carried out using the Bellman method, and we obtain numerical solutions for all the physical field variables in the physical domain. Results are shown graphically; we highlight the effects of the presence of the crack on the thermoelastic interactions inside the medium in the present context, and the results are compared with those of type-III thermoelasticity.

  7. A survey on coordinate metrology using dimensional X-ray CT

    International Nuclear Information System (INIS)

    Matsuzaki, Kazuya

    2016-01-01

    X-ray computed tomography (X-ray CT) occupies an indispensable position in geometrical and dimensional measurement in industry, as it is capable of measuring both the external and internal dimensions of industrial products. Since dimensional X-ray CT has problems with ensuring traceability and estimating uncertainty, the need to develop measurement standards for dimensional X-ray CT is increasing. Several national metrology institutes (NMIs), including NMIJ, have been working on developing such standards. In this report, the background of coordinate metrology using dimensional X-ray CT is reviewed. Then, measurement error sources are discussed. Finally, a plan to develop a high-accuracy dimensional X-ray CT is presented. (author)

  8. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that data differences in sparse and noisy dimensions occupy a large proportion of the similarity, making any pair of results look dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it well suited to similarity analysis after dimensionality reduction.
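
    The construction can be paraphrased in a few lines of Python: grid each dimension into k intervals, and let only dimensions whose two components land in the same or an adjacent cell contribute to the similarity. The exact weighting used by the authors is not given in the abstract, so the scoring below is one plausible reading that keeps the result in [0, 1].

        import numpy as np

        def lattice_similarity(a, b, lo, hi, k=10):
            """Similarity that only credits dimensions whose components fall in
            the same or an adjacent cell of a k-interval grid per dimension."""
            a, b, lo, hi = map(np.asarray, (a, b, lo, hi))
            bins_a = np.clip(((a - lo) / (hi - lo) * k).astype(int), 0, k - 1)
            bins_b = np.clip(((b - lo) / (hi - lo) * k).astype(int), 0, k - 1)
            near = np.abs(bins_a - bins_b) <= 1        # usable dimensions
            diff = np.abs(a - b) / (hi - lo)           # normalized per-dim gap
            return float(np.mean(near * (1.0 - diff)))  # stays within [0, 1]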

  9. A model problem for restricted-data gamma ray emission tomography of highly active nuclear waste

    International Nuclear Information System (INIS)

    Cattle, Brian A.

    2007-01-01

    This paper develops the work of Cattle et al. [Cattle, B.A., Fellerman, A.S., West, R.M., 2004. On the detection of solid deposits using gamma ray emission tomography with limited data. Measurement Science and Technology 15, 1429-1439] by considering a generalization of the model employed therein. The focus of the work is the gamma ray tomographic analysis of high-level waste processing. The work in this paper considers a two-dimensional model for the measurement of gamma ray photon flux, as opposed to the previous one-dimensional analysis via the integrated Beer-Lambert law. The mathematical inverse problem that arises in determining physical quantities from the photon count measurements is tackled using Bayesian statistical methods that are implemented computationally using a Markov chain Monte Carlo (MCMC) approach. In a further new development, the effect of the degree of collimation of the detector on the reliability of the solutions is also considered.
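
    The Bayesian/MCMC machinery involved can be illustrated on a one-parameter caricature of the problem: Poisson photon counts attenuated according to an integrated Beer-Lambert law, with a random-walk Metropolis sampler for the deposit thickness. All model constants in this Python sketch (source intensity, attenuation coefficient, ray geometry factors) are hypothetical, and the real two-dimensional flux model of the paper is not reproduced.

        import numpy as np

        def metropolis_thickness(y, g, I0=1e4, mu=0.5, n_iter=20000, seed=1):
            """Random-walk Metropolis for one deposit thickness d, assuming
            counts y[i] ~ Poisson(I0 * exp(-mu * d * g[i])) along rays with
            known relative path lengths g[i] and a flat prior on d > 0."""
            rng = np.random.default_rng(seed)
            y, g = np.asarray(y), np.asarray(g)

            def loglike(d):
                lam = I0 * np.exp(-mu * d * g)
                return float(np.sum(y * np.log(lam) - lam))  # Poisson log-likelihood

            d, ll, chain = 1.0, loglike(1.0), []
            for _ in range(n_iter):
                prop = d + 0.1 * rng.standard_normal()       # random-walk proposal
                if prop > 0:                                 # prior support
                    llp = loglike(prop)
                    if np.log(rng.random()) < llp - ll:      # accept/reject
                        d, ll = prop, llp
                chain.append(d)
            return np.array(chain)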

  10. Determinable solutions for one-dimensional quantum potentials: scattering, quasi-bound and bound-state problems

    International Nuclear Information System (INIS)

    Lee, Hwasung; Lee, Y J

    2007-01-01

    We derive analytic expressions of the recursive solutions to Schroedinger's equation by means of a cutoff-potential technique for one-dimensional piecewise-constant potentials. These solutions provide a method for accurately determining the transmission probabilities as well as the wavefunction in both classically accessible regions and inaccessible regions for any barrier potentials. It is also shown that the energy eigenvalues and the wavefunctions of bound states can be obtained for potential-well structures by exploiting this method. Calculational results of illustrative examples are shown in order to verify this method for treating barrier and potential-well problems
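
    The same class of piecewise-constant problems is often handled with a transfer-matrix construction, which makes a convenient point of comparison: match plane-wave amplitudes at every interface and read off the transmission amplitude. The Python sketch below uses that standard textbook treatment (in units hbar = m = 1), not the authors' cutoff-potential recursion; evanescent regions are handled automatically through complex wavevectors.

        import numpy as np

        def transmission(E, widths, heights):
            """Transmission probability through piecewise-constant barriers of
            the given widths and heights, with leads at V = 0 on both sides."""
            V = [0.0] + list(heights) + [0.0]
            xs = np.concatenate([[0.0], np.cumsum(widths)])   # interface positions
            k = [np.sqrt(2.0 * (E - v) + 0j) for v in V]

            def M(kj, x):  # plane-wave value/derivative matrix at position x
                return np.array([[np.exp(1j * kj * x), np.exp(-1j * kj * x)],
                                 [1j * kj * np.exp(1j * kj * x),
                                  -1j * kj * np.exp(-1j * kj * x)]])

            T = np.eye(2, dtype=complex)
            for j, x in enumerate(xs):                        # match left to right
                T = np.linalg.solve(M(k[j + 1], x), M(k[j], x)) @ T
            t = T[0, 0] - T[0, 1] * T[1, 0] / T[1, 1]         # no incoming wave from right
            return float(np.real(k[-1] / k[0]) * abs(t) ** 2)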

  11. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
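
    The cost claim is easy to see for the first-order (cut-)HDMR expansion f(x) ~ f0 + sum_j [f_j(x_j) - f0]: building it needs one model run at the cut point plus one sweep per input, so the number of evaluations grows linearly with dimension. A minimal numpy sketch under that first-order assumption follows; the fuzzy α-cut layer and the ADINA coupling of the paper are not included.

        import numpy as np

        def hdmr_first_order(f, x_ref, grids):
            """First-order cut-HDMR surrogate around the cut point x_ref:
            tabulate f along each axis, interpolate at evaluation time."""
            f0 = f(x_ref)
            tables = []
            for j, g in enumerate(grids):
                vals = []
                for xj in g:
                    x = x_ref.copy()
                    x[j] = xj
                    vals.append(f(x) - f0)       # one sweep per input variable
                tables.append(np.array(vals))

            def surrogate(x):
                return f0 + sum(np.interp(x[j], g, tables[j])
                                for j, g in enumerate(grids))
            return surrogate

        # Usage on a toy function of three variables:
        f = lambda x: np.sin(x[0]) + x[1] ** 2 + 0.1 * x[2]
        s = hdmr_first_order(f, np.zeros(3), [np.linspace(-1, 1, 9)] * 3)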

  12. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  13. A block-iterative nodal integral method for forced convection problems

    International Nuclear Information System (INIS)

    Decker, W.J.; Dorning, J.J.

    1992-01-01

    A new efficient iterative nodal integral method for the time-dependent two- and three-dimensional incompressible Navier-Stokes equations has been developed. Using the approach introduced by Azmy and Dorning to develop nodal methods with high accuracy on coarse spatial grids for two-dimensional steady-state problems, and extended to coarse two-dimensional space-time grids by Wilson et al. for thermal convection problems, we have developed a new iterative nodal integral method for the time-dependent Navier-Stokes equations for mechanically forced convection. A new, extremely efficient block-iterative scheme is employed to invert the Jacobian within each of the Newton-Raphson iterations used to solve the final nonlinear discrete-variable equations. By taking advantage of the special structure of the Jacobian, this scheme greatly reduces memory requirements. The accuracy of the overall method is illustrated by applying it to the time-dependent version of the classic two-dimensional driven cavity problem of computational fluid dynamics.

  14. Reply to "Comment on 'Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit' ".

    Science.gov (United States)

    Gebremedhin, Daniel H; Weatherford, Charles A

    2015-02-01

    This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.

  15. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
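
    The ontological feature expansion at the heart of the approach can be caricatured in a few lines of Python: walk each document's terms up an ontology graph so that ancestor concepts become features too, then train a classifier per hallmark. The toy parent map below is a hypothetical stand-in for the MeSH graph, and a plain random forest replaces UDT-RF, whose feature-selection internals are not reproduced here.

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        # Hypothetical stand-in for the MeSH graph: term -> parent terms.
        PARENTS = {"apoptosis": ["cell_death"], "cell_death": ["cell_process"]}

        def expand(tokens):
            """Climb the ontology so documents also carry ancestor terms."""
            seen, stack = set(tokens), list(tokens)
            while stack:
                for p in PARENTS.get(stack.pop(), []):
                    if p not in seen:
                        seen.add(p)
                        stack.append(p)
            return " ".join(seen)

        docs = ["apoptosis evasion reported", "sustained angiogenesis observed"]
        labels = [1, 0]                 # one hallmark, binary for brevity
        X = CountVectorizer().fit_transform([expand(d.split()) for d in docs])
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)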

  16. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  17. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. MPCS was covalently modified by cysteine (MPCS–CO–Cys). MPCS–CO–Cys was used for the first time in the electrochemical detection of heavy metal ions. Heavy metal ions such as Pb(2+) and Cd(2+) can be determined simultaneously. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified, highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. They were then covalently modified with cysteine, an amino acid with a high affinity for some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application to the simultaneous detection of heavy metal ions was also investigated.

  18. Simulation and Analysis of Converging Shock Wave Test Problems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsey, Scott D. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  19. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  20. Smooth controllability of infinite-dimensional quantum-mechanical systems

    International Nuclear Information System (INIS)

    Wu, Re-Bing; Tarn, Tzyh-Jong; Li, Chun-Wen

    2006-01-01

    Manipulation of infinite-dimensional quantum systems is important for controlling complex quantum dynamics in many practical physical and chemical settings. In this paper, a general investigation is cast into the controllability problem of quantum systems evolving on infinite-dimensional manifolds. Recognizing that such problems are related to infinite-dimensional controllability algebras, we introduce an algebraic mathematical framework to describe quantum control systems possessing such controllability algebras. We then present the concept of smooth controllability on infinite-dimensional manifolds, and draw the main result on approximate strong smooth controllability. This is a nontrivial extension of the existing controllability results, based on analysis over finite-dimensional vector spaces, to analysis over infinite-dimensional manifolds. It also opens up many interesting problems for future studies.

  1. Junior High School Students’ Perception about Simple Environmental Problem as an Impact of Problem based Learning

    Science.gov (United States)

    Tapilouw, M. C.; Firman, H.; Redjeki, S.; Chandra, D. T.

    2017-09-01

    Environmental problems are real problems that occur in students' daily lives. Junior high school students' perception of environmental problems is interesting to investigate. The major aim of this study is to explore junior high school students' perceptions of the environmental problems around them and of ways to solve those problems. The subjects of this study are 69 junior high school students from two junior high schools in Bandung. This study uses two open-ended questions. The core of the first question is an environmental problem around them (near school or home). The core of the second question is the way to prevent or solve the problem. These two questions probe the impact of problem-based learning in science learning. There are three major findings in this study. First, based on most students' perceptions, plastic waste causes environmental problems. Second, environmental awareness can be a solution to prevent environmental pollution. Third, most students can classify environmental pollution into land, water and air pollution. We conclude that junior high school students see environmental problems as phenomena, and teachers can explore environmental problems to guide ways of preventing and resolving them.

  2. GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection

    Science.gov (United States)

    Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    Although we have entered the age of petascale computing, many factors still prohibit high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPUs to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, the GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to the GPU, there is a strong message of corporate support for GPUs in HPC. However, many doubts linger concerning the practicality of using GPUs for scientific computing. In particular, the GPU has a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists the GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPUs to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of 0.1 µs per time step per grid point at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2x10^7, on a single GPU.

  3. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate for extracting useful information, so researchers nowadays ... Recently, cluster analysis has become a popular data analysis method in a number of areas.

  4. Two-dimensional wave propagation in layered periodic media

    KAUST Repository

    Quezada de Luna, Manuel

    2014-09-16

    We study two-dimensional wave propagation in materials whose properties vary periodically in one direction only. High-order homogenization is carried out to derive a dispersive effective medium approximation. One-dimensional materials with constant impedance exhibit no effective dispersion. We show that a new kind of effective dispersion may arise in two dimensions, even in materials with constant impedance. This dispersion is a macroscopic effect of microscopic diffraction caused by spatial variation in the sound speed. We analyze this dispersive effect by using high-order homogenization to derive an anisotropic, dispersive effective medium. We generalize to two dimensions a homogenization approach that has been used previously for one-dimensional problems. Pseudospectral solutions of the effective medium equations agree to high accuracy with finite volume direct numerical simulations of the variable-coefficient equations.

  5. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and the high-dimensional propensity score algorithm generally performs slightly better than either alone in terms of mean squared error, when a bias-based analysis is used.

  6. Classical many-body problems amenable to exact treatments (solvable and/or integrable and/or linearizable...) in one-, two- and three-dimensional space

    CERN Document Server

    Calogero, Francesco

    2001-01-01

    This book focuses on exactly treatable classical (i.e. non-quantal non-relativistic) many-body problems, as described by Newton's equation of motion for mutually interacting point particles. Most of the material is based on the author's research and is published here for the first time in book form. One of the main novelties is the treatment of problems in two- and three-dimensional space. Many related techniques are presented, e.g. the theory of generalized Lagrangian-type interpolation in higher-dimensional spaces. This book is written for students as well as for researchers; it works out detailed examples before going on to treat more general cases. Many results are presented via exercises, with clear hints pointing to their solutions.

  7. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  8. Shopping Problems among High School Students

    Science.gov (United States)

    Grant, Jon E.; Potenza, Marc N.; Krishnan-Sarin, Suchitra; Cavallo, Dana A.; Desai, Rani A.

    2010-01-01

    Background: Although shopping behavior among adolescents is normal, for some the shopping becomes problematic. An assessment of adolescent shopping behavior along a continuum of severity and its relationship to other behaviors and health issues is incompletely understood. Methods: A large sample of high school students (n=3999) was examined using a self-report survey with 153 questions concerning demographic characteristics, shopping behaviors, other health behaviors including substance use, and functioning variables such as grades and violent behavior. Results: The overall prevalence of problem shopping was 3.5% (95% CI: 2.93-4.07). Regular smoking, marijuana and other drug use, sadness and hopelessness, and antisocial behaviors (e.g., fighting, carrying weapons) were associated with problem shopping behavior in both boys and girls. Heavy alcohol use was significantly associated with problem shopping only in girls. Conclusion: Problem shopping appears fairly common among high school students and is associated with symptoms of depression and a range of potentially addictive and antisocial behaviors. Significant distress and diminished behavioral control suggest that excessive shopping may often have significant associated morbidity. Additional research is needed to develop specific prevention and treatment strategies for adolescents who report problems with shopping. PMID:21497217

  9. Shopping problems among high school students.

    Science.gov (United States)

    Grant, Jon E; Potenza, Marc N; Krishnan-Sarin, Suchitra; Cavallo, Dana A; Desai, Rani A

    2011-01-01

    Although shopping behavior among adolescents is normal, for some, the shopping becomes problematic. An assessment of adolescent shopping behavior along a continuum of severity and its relationship to other behaviors and health issues is incompletely understood. A large sample of high school students (n = 3999) was examined using a self-report survey with 153 questions concerning demographic characteristics, shopping behaviors, other health behaviors including substance use, and functioning variables such as grades and violent behavior. The overall prevalence of problem shopping was 3.5% (95% CI, 2.93-4.07). Regular smoking, marijuana and other drug use, sadness and hopelessness, and antisocial behaviors (e.g., fighting, carrying weapons) were associated with problem shopping behavior in both boys and girls. Heavy alcohol use was significantly associated with problem shopping only in girls. Problem shopping appears fairly common among high school students and is associated with symptoms of depression and a range of potentially addictive and antisocial behaviors. Significant distress and diminished behavioral control suggest that excessive shopping may often have significant associated morbidity. Additional research is needed to develop specific prevention and treatment strategies for adolescents who report problems with shopping. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    Science.gov (United States)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
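
    In the linear-Gaussian special case the parameter-space reduction has a closed form that makes the idea concrete: the informed directions are the dominant eigenvectors of the prior-preconditioned Gauss-Newton Hessian, and eigenvalues above one mark directions where the data overwhelm the prior. The numpy sketch below covers only this special case, with a scalar noise level for brevity; the general nonlinear algorithms of the paper are not reproduced.

        import numpy as np

        def lis_basis(G, Gpr_sqrt, noise_std, tol=1.0):
            """Likelihood-informed subspace for a linear-Gaussian problem
            y = G x + noise, prior x ~ N(0, Gpr), with Gpr_sqrt a square
            root of the prior covariance and a scalar noise_std."""
            Gt = (G @ Gpr_sqrt) / noise_std     # whitened forward map
            H = Gt.T @ Gt                       # prior-preconditioned Hessian
            lam, V = np.linalg.eigh(H)          # ascending eigenvalues
            keep = lam > tol                    # data beat the prior here
            return Gpr_sqrt @ V[:, keep], lam[keep]   # basis in parameter space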

  11. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  12. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  13. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
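
    The core interaction these records describe, smoothly moving between 2-d projections of a higher-dimensional latent space, can be sketched as follows. The fake latent trajectory, the dimensionality, and the plane-interpolation scheme are toy assumptions; this is not the DataHigh code, which is a Matlab GUI.

```python
# Minimal sketch of navigating 2-d projections of a high-dimensional latent
# space: project a trajectory onto an orthonormal plane and blend smoothly
# between two such planes. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, T = 8, 100                                    # latent dimensionality, timesteps
latent = np.cumsum(rng.standard_normal((T, d)), axis=0)  # fake latent trajectory

def random_plane(d, rng):
    """Orthonormal basis for a random 2-d plane in R^d (via QR)."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
    return Q                                     # d x 2, orthonormal columns

P0, P1 = random_plane(d, rng), random_plane(d, rng)
for t in np.linspace(0.0, 1.0, 5):               # interpolate between the two planes
    Q, _ = np.linalg.qr((1 - t) * P0 + t * P1)   # re-orthonormalize the blend
    xy = latent @ Q                              # T x 2 projection to display
    print(f"t={t:.2f}  first projected point: {xy[0].round(2)}")
```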

  14. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: • Highly ordered three dimensional macroporous carbon spheres (MPCSs) were prepared. • MPCS was covalently modified by cysteine (MPCS-CO-Cys). • MPCS-CO-Cys was used for the first time in the electrochemical detection of heavy metal ions. • Heavy metal ions such as Pb{sup 2+} and Cd{sup 2+} can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using electrode surfaces chemically modified with highly ordered three dimensional macroporous carbon spheres is described. The highly ordered three dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. Owing to the porous structure, high sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, and its potential application for the simultaneous detection of heavy metal ions was also investigated.

  15. Solving Multiple Timetabling Problems at Danish High Schools

    DEFF Research Database (Denmark)

    Kristiansen, Simon

    … problems as mathematical models and solve them using operational research techniques. Two of the models and the suggested solution methods have resulted in implementations in an actual decision support software, and are hence available for the majority of the high schools in Denmark. These implementations … Elective Course Student Sectioning: the problem is solved using ALNS and solutions are proven to be close to optimum. The algorithm has been implemented and made available for the majority of the high schools in Denmark. The second Student Sectioning problem presented is the sectioning of each … high schools. Two types of consultations are presented: the Parental Consultation Timetabling Problem (PCTP) and the Supervisor Consultation Timetabling Problem (SCTP). One mathematical model containing both consultation types has been created and solved using an ALNS approach. The received solutions …

  16. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  17. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  18. Diffraction limited focusing with controllable arbitrary three-dimensional polarization

    International Nuclear Information System (INIS)

    Chen, Weibin; Zhan, Qiwen

    2010-01-01

    We propose a new approach that enables full control over the three-dimensional state of polarization and the field distribution near the focus of a high numerical aperture objective lens. By combining the electric dipole radiation and a vectorial diffraction method, the input field at the pupil plane for generating arbitrary three-dimensionally oriented linear polarization at the focal point with a diffraction limited spot size is found analytically by solving the inverse problem. Arbitrary three-dimensional elliptical polarization can be obtained by introducing a second electric dipole oriented in the orthogonal plane with appropriate amplitude and phase differences

  19. Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm

    Science.gov (United States)

    Chen, Hao; Guan, Weipeng; Li, Simin; Wu, Yuxiang

    2018-04-01

    To improve the precision of indoor positioning and to realize three-dimensional positioning, a reversed indoor positioning system based on visible light communication (VLC) using a genetic algorithm (GA) is proposed. To solve the problem of interference between signal sources, CDMA modulation is used: each light-emitting diode (LED) in the system broadcasts a unique identity (ID) code using CDMA modulation. The receiver receives a mixed signal from every LED reference point; by the orthogonality of the spreading codes in CDMA modulation, the ID information and intensity attenuation information of every LED can be obtained. According to the positioning principle of received signal strength (RSS), the coordinates of the receiver can be determined. Due to system noise and imperfections of the devices used in the system, the distances between the receiver and the transmitters deviate from their real values, resulting in positioning error. By introducing error correction factors into the global parallel search of the genetic algorithm, the coordinates of the receiver in three-dimensional space can be determined precisely. Both simulation and experimental results show that, in practical application scenarios, the proposed positioning system can realize a high-precision positioning service.
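
    A minimal sketch of the RSS-plus-genetic-algorithm idea: distances inferred from received signal strength define a residual that a small GA minimizes over candidate receiver positions. The LED layout, noise level, and GA settings are assumptions for illustration; the paper's error-correction factors are not modeled here.

```python
# Toy sketch of RSS-based 3-d positioning refined by a genetic algorithm.
# All geometry and GA parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
leds = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [4, 4, 3]], float)
true_pos = np.array([1.5, 2.0, 1.0])
# Distances as would be inferred from RSS, with 2% multiplicative noise:
dists = np.linalg.norm(leds - true_pos, axis=1) * (1 + 0.02 * rng.standard_normal(4))

def fitness(p):
    # Sum of squared residuals between modeled and measured distances.
    return np.sum((np.linalg.norm(leds - p, axis=1) - dists) ** 2)

pop = rng.uniform([0, 0, 0], [4, 4, 3], size=(60, 3))       # initial population
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:20]]                     # selection
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    pop = parents.mean(axis=1)                               # crossover (blend)
    pop += 0.05 * rng.standard_normal(pop.shape)             # mutation
best = min(pop, key=fitness)
print("estimate:", best.round(3), " true:", true_pos)
```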

  20. Two-dimensional time dependent Riemann solvers for neutron transport

    International Nuclear Information System (INIS)

    Brunner, Thomas A.; Holloway, James Paul

    2005-01-01

    A two-dimensional Riemann solver is developed for the spherical harmonics approximation to the time dependent neutron transport equation. The eigenstructure of the resulting equations is explored, giving insight into both the spherical harmonics approximation and the Riemann solver. The classic Roe-type Riemann solver used here was developed for one-dimensional problems, but can be used in multidimensional problems by treating each face of a two-dimensional computation cell in a locally one-dimensional way. Several test problems are used to explore the capabilities of both the Riemann solver and the spherical harmonics approximation. The numerical solution for a simple line source problem is compared to the analytic solution to both the P 1 equation and the full transport solution. A lattice problem is used to test the method on a more challenging problem

  1. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and …

  2. Rare event simulation in finite-infinite dimensional space

    International Nuclear Information System (INIS)

    Au, Siu-Kui; Patelli, Edoardo

    2016-01-01

    Modern engineering systems are becoming increasingly complex. Assessing their risk by simulation is intimately related to the efficient generation of rare failure events. Subset Simulation is an advanced Monte Carlo method for risk assessment and it has been applied in different disciplines. Pivotal to its success is the efficient generation of conditional failure samples, which is generally non-trivial. Conventionally an independent-component Markov Chain Monte Carlo (MCMC) algorithm is used, which is applicable to high dimensional problems (i.e., a large number of random variables) without suffering from ‘curse of dimension’. Experience suggests that the algorithm may perform even better for high dimensional problems. Motivated by this, for any given problem we construct an equivalent problem where each random variable is represented by an arbitrary (hence possibly infinite) number of ‘hidden’ variables. We study analytically the limiting behavior of the algorithm as the number of hidden variables increases indefinitely. This leads to a new algorithm that is more generic and offers greater flexibility and control. It coincides with an algorithm recently suggested by independent researchers, where a joint Gaussian distribution is imposed between the current sample and the candidate. The present work provides theoretical reasoning and insights into the algorithm.
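
    The proposal discussed in the closing sentences, a candidate jointly Gaussian with the current sample, can be sketched as follows for conditional sampling on a failure event in standard normal space. The limit-state function, threshold, and correlation parameter are toy assumptions, not taken from the paper.

```python
# Sketch of a joint-Gaussian (preconditioned Crank-Nicolson-like) proposal
# for sampling conditionally on a failure event {g(x) > b} in standard
# normal space. Toy limit state and parameters.
import numpy as np

rng = np.random.default_rng(3)
n, rho, b = 1000, 0.8, 2.0                  # dimension, correlation, threshold
g = lambda x: x.mean() * np.sqrt(len(x))    # toy limit-state function, N(0,1) marginally

x = rng.standard_normal(n)
while g(x) <= b:                            # crude search for a seed in the failure region
    x = rng.standard_normal(n)

accepted = 0
for _ in range(2000):
    # Candidate jointly Gaussian with the current state, marginally N(0, I):
    cand = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    if g(cand) > b:                         # accept only if still a failure sample
        x, accepted = cand, accepted + 1
print("acceptance rate:", accepted / 2000)
```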

  3. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  4. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)

    2016-07-27

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37 {sup 3} voxels was obtained.

  5. Quantum key distribution session with 16-dimensional photonic states

    Science.gov (United States)

    Etcheverry, S.; Cañas, G.; Gómez, E. S.; Nogueira, W. A. T.; Saavedra, C.; Xavier, G. B.; Lima, G.

    2013-01-01

    The secure transfer of information is an important problem in modern telecommunications. Quantum key distribution (QKD) provides a solution to this problem by using individual quantum systems to generate correlated bits between remote parties, that can be used to extract a secret key. QKD with D-dimensional quantum channels provides security advantages that grow with increasing D. However, the vast majority of QKD implementations has been restricted to two dimensions. Here we demonstrate the feasibility of using higher dimensions for real-world quantum cryptography by performing, for the first time, a fully automated QKD session based on the BB84 protocol with 16-dimensional quantum states. Information is encoded in the single-photon transverse momentum and the required states are dynamically generated with programmable spatial light modulators. Our setup paves the way for future developments in the field of experimental high-dimensional QKD. PMID:23897033
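
    A toy model of the sifting arithmetic in d-dimensional BB84: each sifted symbol carries log2(d) bits, so d = 16 gives 4 bits per photon. The two-basis choice and the noiseless channel below are illustrative assumptions, not a model of the optical setup.

```python
# Schematic d-dimensional BB84 sifting (d = 16) with two bases, labelled 0
# and 1. A noiseless toy model of the raw-to-sifted key flow only.
import numpy as np

rng = np.random.default_rng(4)
d, n = 16, 10000
alice_sym = rng.integers(0, d, n)          # each symbol carries log2(16) = 4 bits
alice_basis = rng.integers(0, 2, n)
bob_basis = rng.integers(0, 2, n)

keep = alice_basis == bob_basis            # sifting: keep matching-basis rounds
bob_sym = alice_sym[keep]                  # ideal channel: perfect correlation
sifted_bits = keep.sum() * np.log2(d)
print(f"sifted fraction: {keep.mean():.2f}, sifted key bits: {sifted_bits:.0f}")
```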

  6. Inverse radiation problem of temperature distribution in one-dimensional isotropically scattering participating slab with variable refractive index

    International Nuclear Information System (INIS)

    Namjoo, A.; Sarvari, S.M. Hosseini; Behzadmehr, A.; Mansouri, S.H.

    2009-01-01

    In this paper, an inverse analysis is performed for the estimation of the source term distribution from the measured exit radiation intensities at the boundary surfaces in a one-dimensional absorbing, emitting and isotropically scattering medium between two parallel plates with variable refractive index. The variation of the refractive index is assumed to be linear. The radiative transfer equation is solved by the constant quadrature discrete ordinate method. The inverse problem is formulated as an optimization problem minimizing an objective function expressed as the sum of squared deviations between measured and estimated exit radiation intensities at the boundary surfaces. The conjugate gradient method is used to solve the inverse problem through an iterative procedure. The effects of various variables on source estimation are investigated, such as the type of source function, errors in the measured data and system parameters, the gradient of the refractive index across the medium, the optical thickness, the single scattering albedo, and the boundary emissivities. The results show that in the case of noisy input data, variation of the system parameters may affect the inverse solution, especially at high error levels in the measured data. Errors in the measured data play a more important role than errors in the radiative system parameters, with the exception of the refractive index distribution; the accuracy of source estimation is very sensitive to errors in the refractive index distribution. Therefore, the refractive index distribution and the measured exit intensities should be determined accurately, within a limited error bound, in order to obtain an accurate estimation of the source term in a graded index medium.
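
    The optimization loop described above can be sketched on a linearized stand-in problem: recover a source vector from exit intensities by conjugate-gradient minimization of the squared misfit. The forward matrix below is a random placeholder, not the discrete-ordinates operator of the paper.

```python
# Sketch: recover a discretized source vector s from intensities y = A s by
# minimizing ||A s - y||^2 with conjugate gradients on the normal equations.
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 20
A = rng.random((m, n)) / n                 # assumed linearized forward map
s_true = np.sin(np.linspace(0, np.pi, n))  # smooth "true" source profile
y = A @ s_true + 1e-3 * rng.standard_normal(m)

AtA, Aty = A.T @ A, A.T @ y                # normal equations A^T A s = A^T y
s = np.zeros(n)
r = Aty - AtA @ s
p = r.copy()
for _ in range(50):                        # standard CG iteration
    Ap = AtA @ p
    alpha = (r @ r) / (p @ Ap)
    s += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
print("relative error:", np.linalg.norm(s - s_true) / np.linalg.norm(s_true))
```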

  7. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  8. Vertical drying of a suspension of sticks: Monte Carlo simulation for continuous two-dimensional problem

    Science.gov (United States)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Vygornitskii, Nikolai V.

    2018-02-01

    The vertical drying of a two-dimensional colloidal film containing zero-thickness sticks (lines) was studied by means of kinetic Monte Carlo (MC) simulations. The continuous two-dimensional problem for both the positions and orientations was considered. The initial state before drying was produced using a model of random sequential adsorption with isotropic orientations of the sticks. During the evaporation, an upper interface falls with a linear velocity in the vertical direction, and the sticks undergo translational and rotational Brownian motions. The MC simulations were run at different initial number concentrations (the numbers of sticks per unit area), p_i, and solvent evaporation rates, u. For completely dried films, the spatial distributions of the sticks, the order parameters, and the electrical conductivities of the films in both the horizontal, x, and vertical, y, directions were examined. Significant evaporation-driven self-assembly and stratification of the sticks in the vertical direction was observed. The extent of stratification increased with increasing values of u. The anisotropy of the electrical conductivity of the film can be finely regulated by changes in the values of p_i and u.

  9. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    Science.gov (United States)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest of clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speedup on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.

  10. Preface [HD3-2015: International meeting on high-dimensional data-driven science]

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  11. Sensitivity analysis of numerical results of one- and two-dimensional advection-diffusion problems

    International Nuclear Information System (INIS)

    Motoyama, Yasunori; Tanaka, Nobuatsu

    2005-01-01

    Numerical simulation has been playing an increasingly important role in the fields of science and engineering. However, every numerical result contains errors, such as modeling, truncation, and computing errors, and the magnitude of the errors quantitatively contained in the results is unknown. This situation leads to large design margins when designing by analysis and prevents further cost reduction through design optimization. To overcome this situation, we developed a new method to numerically analyze the quantitative error of a numerical solution by using the sensitivity analysis method and the modified equation approach. If a reference case with typical parameters is calculated once by this method, then no additional calculation is required to estimate the results for other numerical parameters, such as those with higher resolutions. Furthermore, we can predict the exact solution from the sensitivity analysis results and can quantitatively evaluate the error of numerical solutions. Since the method incorporates the features of the conventional sensitivity analysis method, it can evaluate the effect of the modeling error as well as of the truncation error. In this study, we confirm the effectiveness of the method through numerical benchmark problems of one- and two-dimensional advection-diffusion problems. (author)

  12. Transport synthetic acceleration scheme for multi-dimensional neutron transport problems

    Energy Technology Data Exchange (ETDEWEB)

    Modak, R S; Kumar, Vinod; Menon, S V.G. [Theoretical Physics Div., Bhabha Atomic Research Centre, Mumbai (India); Gupta, Anurag [Reactor Physics Design Div., Bhabha Atomic Research Centre, Mumbai (India)

    2005-09-15

    The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easier to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)
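
    A zero-dimensional caricature of why SI needs acceleration, the issue both this and the following record address: in a homogeneous problem the SI error contracts by the scattering ratio c per sweep, so sweep counts blow up as c approaches 1. The numbers below are purely illustrative; this is not a transport solver.

```python
# Schematic illustration of Source Iteration stalling in highly scattering
# media: the error decays by a factor c per sweep, so c -> 1 is slow.
c_values = [0.5, 0.9, 0.99]
q, tol = 1.0, 1e-6
for c in c_values:
    phi, exact, sweeps = 0.0, q / (1.0 - c), 0
    while abs(phi - exact) > tol * exact:
        phi = c * phi + q          # one SI sweep: scattering source + fixed source
        sweeps += 1
    print(f"scattering ratio c = {c}: {sweeps} sweeps to converge")
```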

  13. Transport synthetic acceleration scheme for multi-dimensional neutron transport problems

    International Nuclear Information System (INIS)

    Modak, R.S.; Vinod Kumar; Menon, S.V.G.; Gupta, Anurag

    2005-09-01

    The numerical solution of the linear multi-energy-group neutron transport equation is required in several analyses in nuclear reactor physics and allied areas. Computer codes based on the discrete ordinates (Sn) method are commonly used for this purpose. These codes solve the external source problem and the K-eigenvalue problem. The overall solution technique involves the solution of a source problem in each energy group as an intermediate procedure. Such a single-group source problem is solved by the so-called Source Iteration (SI) method. As is well known, the SI method converges very slowly for optically thick and highly scattering regions, leading to large CPU times. Over the last three decades, many schemes have been tried to accelerate the SI, the most prominent being the Diffusion Synthetic Acceleration (DSA) scheme. The DSA scheme, however, often fails and is also rather difficult to implement. In view of this, in 1997, Ramone and others developed a new acceleration scheme called Transport Synthetic Acceleration (TSA), which is much more robust and easier to implement. This scheme has recently been incorporated in 2-D and 3-D in-house codes at BARC. This report presents studies on the utility of the TSA scheme for fairly general test problems involving many energy groups and anisotropic scattering. The scheme is found to be useful for problems in Cartesian as well as cylindrical geometry. (author)

  14. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  15. A three-dimensional neutron transport benchmark solution

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1993-01-01

    For one-group neutron transport theory in one dimension, several powerful analytical techniques have been developed to solve the neutron transport equation, including Caseology, Wiener-Hopf factorization, and Fourier and Laplace transform methods. In addition, after a Fourier transform in the transverse plane and formulation of a pseudo problem, two-dimensional (2-D) and three-dimensional (3-D) problems can be solved using the techniques specifically developed for the one-dimensional (1-D) case. Numerical evaluation of the resulting expressions, requiring an inversion in the transverse plane, has been successful for 2-D problems but becomes exceedingly difficult in the 3-D case. In this paper, we show that by using the symmetry along the beam direction, a 2-D problem can be transformed into a 3-D problem in an infinite medium. The numerical solution to the 3-D problem is then demonstrated. Thus, a true 3-D transport benchmark solution can be obtained from a well-established numerical solution to a 2-D problem

  16. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology, for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
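
    A minimal sketch of the mechanism described above: per-dimension information gain, computed once up front, sets per-gene mutation probabilities in a GA-based feature selector. The synthetic data, the proxy fitness, and all GA parameters are assumptions for illustration, not the paper's text-classification pipeline.

```python
# Sketch: information gain of each dimension modulates that dimension's
# mutation probability in a GA feature selector. Synthetic toy data.
import numpy as np

rng = np.random.default_rng(6)
n_docs, n_dims = 300, 40
X = rng.integers(0, 2, (n_docs, n_dims))            # binary term-presence matrix
noise = rng.random(n_docs) < 0.2
y = np.where(noise, rng.integers(0, 2, n_docs), X[:, 0] | X[:, 3])  # signal in dims 0, 3

def entropy(labels):
    p = np.bincount(labels, minlength=2) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def info_gain(x, yv):
    h = entropy(yv)
    for v in (0, 1):
        mask = x == v
        if mask.any():
            h -= mask.mean() * entropy(yv[mask])
    return h

gains = np.array([info_gain(X[:, j], y) for j in range(n_dims)])
mut_p = 0.01 + 0.2 * gains / gains.max()            # a priori gains set mutation rates

def fitness(mask):
    # Proxy fitness: total gain of the selected dims, penalized by subset size.
    return gains[mask].sum() - 0.02 * mask.sum()

pop = rng.random((30, n_dims)) < 0.5                # population of feature masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]      # keep the 10 fittest masks
    children = elite[rng.integers(0, 10, (30, 2))]
    cross = rng.random((30, n_dims)) < 0.5          # uniform crossover
    pop = np.where(cross, children[:, 0], children[:, 1])
    flip = rng.random((30, n_dims)) < mut_p         # gain-weighted mutation
    pop = pop ^ flip
best = max(pop, key=fitness)
print("selected dims include:", np.flatnonzero(best)[:10])
```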

  17. Low dimensional field theories and condensed matter physics

    International Nuclear Information System (INIS)

    Nagaoka, Yosuke

    1992-01-01

    This issue is devoted to the Proceedings of the Fourth Yukawa International Seminar (YKIS '91) on Low Dimensional Field Theories and Condensed Matter Physics, which was held from July 28 to August 3 in Kyoto. In recent years there have been great experimental discoveries in the field of condensed matter physics: the quantum Hall effect and high temperature superconductivity. Theoretical efforts to clarify the mechanisms of these phenomena revealed that they are deeply related to the basic problem of many-body systems with strong correlation. On the other hand, there have been important developments in field theory in low dimensions: conformal field theory, the Chern-Simons gauge theory, etc. It was found that these theories work as a powerful method of approach to problems in condensed matter physics. YKIS '91 was devoted to the study of common problems in low dimensional field theories and condensed matter physics. Seventeen of the presented papers are collected in this issue. (J.P.N.)

  18. Construction of high-dimensional universal quantum logic gates using a Λ system coupled with a whispering-gallery-mode microresonator.

    Science.gov (United States)

    He, Ling Yan; Wang, Tie-Jun; Wang, Chuan

    2016-07-11

    High-dimensional quantum systems provide a higher quantum channel capacity, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates is required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate, in which the high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and the gate operates on a shorter temporal scale, favoring experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to the 2n-dimensional case and is feasible with current technology.
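
    At the matrix level, a controlled-flip gate of the kind described can be written down directly; the sketch below takes a qubit control and a d-level cyclic shift on the target, with d = 4 an arbitrary choice. This illustrates only the gate's action, not the proposed cavity-assisted implementation.

```python
# Controlled-flip unitary on a qubit-control / qudit-target pair:
# CF = |0><0| (x) I_d + |1><1| (x) X_d, with X_d |t> = |t+1 mod d>.
import numpy as np

d = 4                                           # target qudit dimension (assumed)
X_d = np.roll(np.eye(d), 1, axis=0)             # cyclic shift X_d |t> = |t+1 mod d>
P0, P1 = np.diag([1, 0]), np.diag([0, 1])       # control-qubit projectors
CF = np.kron(P0, np.eye(d)) + np.kron(P1, X_d)

assert np.allclose(CF @ CF.conj().T, np.eye(2 * d))   # unitarity check
state = np.zeros(2 * d); state[d + 2] = 1.0     # basis state |control=1, target=2>
out = CF @ state
print("maps |1,2> to basis index", int(np.argmax(out)))  # expect d + 3, i.e. |1,3>
```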

  19. The discrete cones methods for two-dimensional neutral particle transport problems with voids

    International Nuclear Information System (INIS)

    Watanabe, Y.; Maynard, C.W.

    1983-01-01

    One of the most widely applied deterministic methods for time-independent, two-dimensional neutron transport calculations is the discrete ordinates method (DSN). The DSN solution, however, fails to be accurate in a void due to the ray effect. In order to circumvent this drawback, the authors have been developing a novel approximation: the discrete cones method (DCN), in which a group of particles in a cone are traced simultaneously instead of particles in discrete directions as in the DSN method. Programs, which apply the DSN method in a non-vacuum region and the DCN method in a void, have been written for transport calculations in X-Y coordinates. The solutions for test problems demonstrate mitigation of the ray effect in voids without losing the computational efficiency of the DSN method

  20. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ˜4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ˜200∶1.

  1. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of high-speed shaking table structures. The purpose of this paper is to validate the accuracy of the three-dimensional coordinates of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. All of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of shaking table structures.

  2. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage scheme in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and show numerical results that demonstrate the effectiveness of the proposed data storage.

  3. Heuristics for Multidimensional Packing Problems

    DEFF Research Database (Denmark)

    Egeblad, Jens

    The main contributions of the thesis are three new heuristics for strip-packing and knapsack packing problems where items are both rectangular and irregular. In the first two papers we describe a heuristic for the multidimensional strip-packing problem that is based on a relaxed placement principle. The heuristic starts with a random overlapping placement of items and large container dimensions. From the overlapping placement, overlap is reduced iteratively until a non-overlapping placement is found and a new problem is solved with a smaller container size … for a minimum height container required for the items. The results of this heuristic are among the best published in the literature both for two- and three-dimensional strip-packing problems for irregular shapes. In the third paper, we introduce a heuristic for two- and three-dimensional rectangular knapsack packing problems. The two-dimensional heuristic uses the sequence pair …

  4. Multi-dimensional Analysis for SLB Transient in ATLAS Facility as Activity of DSP (Domestic Standard Problem)

    International Nuclear Information System (INIS)

    Bae, B. U.; Park, Y. S.; Kim, J. R.; Kang, K. H.; Choi, K. Y.; Sung, H. J.; Hwang, M. J.; Kang, D. H.; Lim, S. G.; Jun, S. S.

    2015-01-01

    Participants in DSP-03 were divided into three groups, and each group focused on a specific subject related to the enhancement of code analysis. Group A investigated the scaling capability of ATLAS test data by comparison with a code analysis for the prototype, and Group C studied the effect of various models in one-dimensional codes. This paper briefly summarizes the code analysis results from the Group B participants in DSP-03 for the ATLAS test facility. The code analysis by Group B focuses on investigating the multi-dimensional thermal-hydraulic phenomena in the ATLAS facility during the SLB transient. Even though a one-dimensional system analysis code cannot simulate the whole ATLAS facility with a nodalization at the CFD (Computational Fluid Dynamics) scale, the reactor pressure vessel can be modeled with multi-dimensional components to reflect the thermal mixing phenomena inside the downcomer and the core. The CFD can also give useful information for understanding complex phenomena in specific components such as the reactor pressure vessel. In the analysis activity of Group B in ATLAS DSP-03, participants adopted a multi-dimensional approach to the code analysis of the SLB transient in the ATLAS test facility. The main purpose of the analysis was to investigate the prediction capability of multi-dimensional analysis tools for the SLB experiment. In particular, the asymmetric cooling and thermal mixing phenomena in the reactor pressure vessel were the main focus of the multi-dimensional component modeling

  5. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th-order polynomial nodal expansion method (NEM). As the 4th-order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speedup factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers running UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results, and detailed information on the usage of this code, including input data instructions and sample input data. (author)

  6. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th-order polynomial nodal expansion method (NEM). As the 4th-order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speedup factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers running UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results, and detailed information on the usage of this code, including input data instructions and sample input data. (author)

  7. The moment problem

    CERN Document Server

    Schmüdgen, Konrad

    2017-01-01

    This advanced textbook provides a comprehensive and unified account of the moment problem. It covers the classical one-dimensional theory and its multidimensional generalization, including modern methods and recent developments. In both the one-dimensional and multidimensional cases, the full and truncated moment problems are carefully treated separately. Fundamental concepts, results and methods are developed in detail and accompanied by numerous examples and exercises. Particular attention is given to powerful modern techniques such as real algebraic geometry and Hilbert space operators. A wide range of important aspects are covered, including the Nevanlinna parametrization for indeterminate moment problems, canonical and principal measures for truncated moment problems, the interplay between Positivstellensätze and moment problems on semi-algebraic sets, the fibre theorem, multidimensional determinacy theory, operator-theoretic approaches, and the existence theory and important special topics of multidime...

  8. H-infinity Tracking Problems for a Distributed Parameter System

    DEFF Research Database (Denmark)

    Larsen, Mikael

    1997-01-01

    The thesis considers the problem of finding a finite-dimensional controller for an infinite-dimensional system (a tunnel pasteurizer), combined with a robustness analysis.

  9. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
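
    For context on the reported 2.05 bits per sifted photon, the sketch below evaluates a standard secret-key-rate formula for d-dimensional QKD with two mutually unbiased bases, R = log2(d) - 2*h_d(e). The error rates are illustrative inputs and the formula choice is an assumption, not taken from the paper; the 2.05 figure above is the measured value.

```python
# Secret-key-rate sketch for d-dimensional QKD with two mutually unbiased
# bases, using the d-dimensional error entropy
# h_d(e) = -(1-e) log2(1-e) - e log2(e / (d-1)).
import numpy as np

def h_d(e, d):
    if e == 0:
        return 0.0
    return -(1 - e) * np.log2(1 - e) - e * np.log2(e / (d - 1))

d = 7                                            # alphabet size used above
for e in (0.0, 0.05, 0.10, 0.20):                # illustrative symbol error rates
    R = np.log2(d) - 2 * h_d(e, d)
    print(f"error rate {e:.2f}: key rate {R:5.2f} bits per sifted photon")
```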

  10. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

    Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of the angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in a parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality …
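
    For reference, the quantity the paper approximates, the angle-based outlier factor (ABOF), is shown below in its exact cubic-time form: the variance over all point pairs of distance-weighted angles. The random-projection estimator itself is not reproduced here, and the dataset is synthetic.

```python
# Exact ABOF in its O(n^3) form: for each point, the variance of weighted
# angle terms <AB, AC> / (|AB|^2 |AC|^2) over all other point pairs.
# Low variance indicates an outlier.
import numpy as np

def abof(X):
    n = len(X)
    scores = np.empty(n)
    for i in range(n):
        terms = []
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                a, b = X[j] - X[i], X[k] - X[i]
                terms.append((a @ b) / ((a @ a) * (b @ b)))  # weighted angle
        scores[i] = np.var(terms)
    return scores

rng = np.random.default_rng(7)
X = np.vstack([rng.standard_normal((30, 10)),        # inlier cluster
               6 + rng.standard_normal((1, 10))])    # one planted outlier
print("most outlying index:", int(np.argmin(abof(X))))  # expect 30
```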

  11. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    Science.gov (United States)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.

  12. A student's guide to dimensional analysis

    CERN Document Server

    Lemons, Don S

    2017-01-01

    This introduction to dimensional analysis covers the methods, history and formalisation of the field, and provides physics and engineering applications. Covering topics from mechanics, hydro- and electrodynamics to thermal and quantum physics, it illustrates the possibilities and limitations of dimensional analysis. Introducing basic physics and fluid engineering topics through the mathematical methods of dimensional analysis, this book is perfect for students in physics, engineering and mathematics. Explaining potentially unfamiliar concepts such as viscosity and diffusivity, the text includes worked examples and end-of-chapter problems with answers provided in an accompanying appendix, which help make it ideal for self-study. Long-standing methodological problems arising in popular presentations of dimensional analysis are also identified and solved, making the book a useful text for advanced students and professionals.

  13. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)

  14. A memory efficient method for fully three-dimensional object reconstruction with HAADF STEM

    International Nuclear Information System (INIS)

    Van den Broek, W.; Rosenauer, A.; Van Aert, S.; Sijbers, J.; Van Dyck, D.

    2014-01-01

    The conventional approach to object reconstruction through electron tomography is to reduce the three-dimensional problem to a series of independent two-dimensional slice-by-slice reconstructions. However, at atomic resolution the image of a single atom extends over many such slices and incorporating this image as prior knowledge in tomography or depth sectioning therefore requires a fully three-dimensional treatment. Unfortunately, the size of the three-dimensional projection operator scales highly unfavorably with object size and readily exceeds the available computer memory. In this paper, it is shown that for incoherent image formation the memory requirement can be reduced to the fundamental lower limit of the object size, both for tomography and depth sectioning. Furthermore, it is shown through multislice calculations that high angle annular dark field scanning transmission electron microscopy can be sufficiently incoherent for the reconstruction of single element nanocrystals, but that dynamical diffraction effects can cause classification problems if more than one element is present. - Highlights: • The full 3D approach to atomic resolution object retrieval has high memory load. • For incoherent imaging the projection process is a matrix–vector product. • Carrying out this product implicitly as Fourier transforms reduces memory load. • Reconstructions are demonstrated from HAADF STEM and depth sectioning simulations
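
    The memory-saving idea, applying the projection implicitly as a convolution via FFTs rather than storing the operator, can be sketched as follows; the object and probe point-spread function are toy stand-ins, not the multislice calculations of the paper.

    ```python
    # Implicit A @ x via FFT convolution: O(n^2) memory instead of an explicit
    # projection matrix (illustrative PSF and sizes).
    import numpy as np

    n = 256
    x = np.zeros((n, n)); x[100:110, 120:130] = 1.0     # toy 2D object slice
    kx = np.fft.fftfreq(n)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    psf_hat = np.exp(-(KX**2 + KY**2) / (2 * 0.1**2))   # stand-in probe PSF

    def apply_projection(x):
        # One forward and one inverse FFT: the matrix-vector product is never
        # formed explicitly, so memory stays at the size of the object itself.
        return np.fft.ifft2(np.fft.fft2(x) * psf_hat).real

    image = apply_projection(x)
    print(image.max())
    ```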

  15. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa. Physical Review A 96, 053860 (2017).

  16. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    Science.gov (United States)

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
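
    The constrained ℓ1 step can be illustrated as a standard basis-pursuit linear program; the constraints and inputs below (budget, target return, toy mean estimates) are simplified stand-ins for the theoretical optimal-control equations used in the paper.

    ```python
    # Hedged sketch: minimize ||w||_1 subject to linear constraints, cast as a
    # linear program via the split w = u - v with u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    p = 200                                    # number of stocks
    mu = 0.05 + 0.02 * rng.standard_normal(p)  # estimated mean returns (toy)
    target = 0.07

    # min 1'(u + v)  s.t.  mu'(u - v) = target,  1'(u - v) = 1,  u, v >= 0
    A_eq = np.vstack([np.hstack([mu, -mu]),
                      np.hstack([np.ones(p), -np.ones(p)])])
    res = linprog(c=np.ones(2 * p), A_eq=A_eq, b_eq=[target, 1.0], bounds=(0, None))
    w = res.x[:p] - res.x[p:]

    print("active positions:", int(np.sum(np.abs(w) > 1e-10)))  # very sparse
    print("achieved return:", float(mu @ w))
    ```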

  17. High dimensional ICA analysis detects within-network functional connectivity damage of default mode and sensory motor networks in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Ottavia Dipasquale

    2015-02-01

    Full Text Available High-dimensional independent component analysis (ICA), compared to low-dimensional ICA, allows performing a detailed parcellation of the resting state networks. The purpose of this study was to give further insight into functional connectivity (FC) in Alzheimer's disease (AD) using high-dimensional ICA. For this reason, we performed both low- and high-dimensional ICA analyses of resting state fMRI (rfMRI) data of 20 healthy controls and 21 AD patients, focusing on the primarily altered default mode network (DMN) and exploring the sensory motor network (SMN). As expected, results obtained at low dimensionality were in line with previous literature. Moreover, the high-dimensional results allowed us to observe both the presence of within-network disconnections and FC damage confined to some of the resting state sub-networks. Due to the higher sensitivity of the high-dimensional ICA analysis, our results suggest that high-dimensional decomposition into sub-networks is very promising for better localizing FC alterations in AD, and that FC damage is not confined to the default mode network.

  18. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  19. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King's College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
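
    For a toy nearest-neighbour action, the recursion reduces the N-dimensional integral to powers of a quadrature-discretized kernel matrix; the model and parameters below are illustrative, not the paper's topological rotor or anharmonic oscillator setups.

    ```python
    # Sketch of recursive numerical integration: the N-dimensional integral
    # over a periodic chain with nearest-neighbour coupling factorizes into
    # repeated 1D Gaussian quadratures, i.e. a trace of a kernel-matrix power.
    import numpy as np

    beta, N, m = 1.0, 16, 32                      # coupling, sites, quadrature points
    nodes, weights = np.polynomial.legendre.leggauss(m)
    phi = np.pi * (nodes + 1.0)                   # map [-1, 1] -> [0, 2*pi]
    w = np.pi * weights

    # Kernel matrix: K[i, j] = exp(beta * cos(phi_i - phi_j)) * w_j
    K = np.exp(beta * np.cos(phi[:, None] - phi[None, :])) * w[None, :]

    # Z = Tr(K^N): m x m matrix powers instead of an m^N-point product grid.
    Z = np.trace(np.linalg.matrix_power(K, N))
    print(Z / (2 * np.pi) ** N)                   # normalized partition function
    ```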

  20. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise and the behavior of the fields with a 1/r asymptote in the imaging procedure are analyzed in our simulations and compared to the treatment of these phenomena in the published literature. Next, we examine the calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials, to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
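
    The inversion-with-cutoff idea of the first part can be sketched generically as a regularized division in Fourier space; the kernel and cutoff width below are assumptions for illustration, not the thesis's actual field-to-current relation.

    ```python
    # Illustrative sketch: field inversion in spatial Fourier space with a
    # smooth Gaussian cutoff suppressing the amplified high-frequency noise.
    import numpy as np

    n, dx, sigma_k = 256, 1.0, 0.15              # grid, spacing, cutoff width (assumed)
    rng = np.random.default_rng(2)
    field = rng.standard_normal((n, n))          # stand-in measured magnetic field

    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    kmag = np.hypot(KX, KY)

    kernel = np.exp(-kmag * 5.0)                 # stand-in field-to-current kernel;
                                                 # division by it amplifies noise at high k
    cutoff = np.exp(-(kmag / sigma_k) ** 2 / 2)  # smooth Gaussian low-pass

    current_hat = np.fft.fft2(field) / kernel * cutoff
    current = np.fft.ifft2(current_hat).real     # noise-controlled reconstruction
    print(current.std())
    ```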

  1. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    International Nuclear Information System (INIS)

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the local discontinuous Galerkin method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest stage. The local discontinuous Galerkin method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems with known analytical solutions in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)

  2. Vibrations of thin piezoelectric shallow shells: Two-dimensional ...

    Indian Academy of Sciences (India)


    In this paper we consider the eigenvalue problem for piezoelectric shallow shells and we show that, as the thickness of the shell goes to zero, the eigensolutions of the three-dimensional piezoelectric shells converge to the eigensolutions of a two-dimensional eigenvalue problem. Keywords. Vibrations; piezoelectricity ...

  3. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
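
    The per-frame reconstruction step under the pinhole model can be illustrated with a linear (DLT) triangulation from two views; the camera matrices below are toy examples rather than the calibrated ones of the experiment. Repeating this per frame and differencing positions gives the velocities.

    ```python
    # Minimal sketch: triangulate a particle's 3D position from two views.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Solve for X with uv_i ~ P_i @ [X, 1] via SVD least squares (DLT)."""
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    def project(P, X):
        h = P @ np.append(X, 1.0)
        return h[:2] / h[2]

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
    R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # rotated second view
    P2 = np.hstack([R, np.array([[0.], [0.], [2.]])])

    X_true = np.array([0.3, -0.2, 4.0])
    print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~X_true
    ```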

  4. A two-dimensional finite element method for analysis of solid body contact problems in fuel rod mechanics

    International Nuclear Information System (INIS)

    Nissen, K.L.

    1988-06-01

    Two computer codes for the analysis of fuel rod behavior have been developed. Fuel rod mechanics is treated by a two-dimensional, axisymmetric finite element method. The program KONTAKT is used for detailed examinations of fuel rod sections, whereas the second program METHOD2D allows transient calculations of whole fuel rods. The mechanical contact of fuel and cladding during heating of the fuel rod is very important for its integrity. Both computer codes use a Newton-Raphson iteration for the solution of the nonlinear solid body contact problem. A constitutive equation is applied for the dependency of contact pressure on the normal approach of the surfaces, which are assumed to be rough. If friction is present on the contacting surfaces, Coulomb's friction law is used. Code validation is done by comparison with known analytical solutions for special problems. Results of the contact algorithm for an elastic ball pressing against a rigid surface are compared with Hertzian theory. Influences of fuel-pellet geometry as well as influences of the discretisation of displacements and stresses of a single fuel pellet are studied. Contact of fuel and cladding is calculated for a fuel rod section with two fuel pellets. The influence of friction forces between fuel and cladding on their axial expansion is demonstrated. By calculating deformations and temperatures during a transient fuel rod experiment of the CABRI series, the feasibility of two-dimensional finite element analysis of whole fuel rods is shown. (orig.) [de

  5. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  6. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  7. Numerical model for the solution of two-dimensional natural convection problems in arbitrary cavities

    International Nuclear Information System (INIS)

    Milioli, F.E.

    1985-01-01

    In this research work a numerical model for the solution of two-dimensional natural convection problems in arbitrary cavities of a Boussinesq fluid is presented. The conservation equations are written in a general curvilinear coordinate system which matches the irregular boundaries of the domain. The nonorthogonal system is generated by a suitable system of elliptic equations. The momentum and continuity equations are transformed from the Cartesian system to the general curvilinear system, keeping the Cartesian velocity components as the dependent variables in the transformed domain. Finite difference equations are obtained for the contravariant velocity components in the transformed domain. The numerical calculations are performed in a fixed rectangular domain, and both the Cartesian and the contravariant velocity components take part in the solution procedure. The dependent variables are arranged on the grid in a staggered manner. The numerical model is tested by solving the driven flow in a square cavity with a moving side using a nonorthogonal grid. The natural convection in a square cavity, using an orthogonal and a nonorthogonal grid, is also solved for the model test. Also, the solution for the buoyancy flow between a square cylinder placed inside a circular cylinder is presented. The results of the test problems are compared with those available in the specialized literature. Finally, in order to show the generality of the model, the natural convection problem inside a very irregular cavity is presented. (Author) [pt

  8. Effect of Rotation for Two-Temperature Generalized Thermoelasticity of Two-Dimensional under Thermal Shock Problem

    Directory of Open Access Journals (Sweden)

    Kh. Lotfy

    2013-01-01

    Full Text Available The theory of two-temperature generalized thermoelasticity based on the theory of Youssef is used to solve boundary value problems of a two-dimensional half-space. The governing equations are solved using the normal mode method under the purview of the Lord-Shulman (LS) and the classical dynamical coupled (CD) theories. The general solution obtained is applied to a specific problem of a half-space subjected to one type of heating, the thermal shock type. We study the influence of rotation on the total deformation of the thermoelastic half-space and the interaction with each other under the influence of the two-temperature theory. The material is a homogeneous isotropic elastic half-space. The methodology applied here is normal mode analysis, which is used to solve the resulting nondimensional coupled field equations for the two theories. Numerical results for the displacement components, force stresses, and temperature distribution are presented graphically and discussed. The conductive temperature, the dynamical temperature, the stress, and the strain distributions are shown graphically with some comparisons.

  9. Three Dimensional Energy Transmitting Boundary in the Time Domain

    Directory of Open Access Journals (Sweden)

    Naohiro Nakamura

    2015-11-01

    Full Text Available Although the energy transmitting boundary is accurate and efficient for FEM earthquake response analysis, it can be applied in the frequency domain only. In previous papers, the author proposed an earthquake response analysis method using the time domain energy transmitting boundary for two-dimensional problems. In this paper, this technique is extended to three-dimensional problems. The inner field is assumed to have a hexahedron shape, and the approximate time domain boundary is explained first. Next, the two-dimensional anti-plane time domain boundary is studied as a part of the approximate three-dimensional boundary method. Then, the accuracy and efficiency of the proposed method are confirmed by example problems.

  10. Classical solutions of two dimensional Stokes problems on non smooth domains. 1: The Radon integral operators

    International Nuclear Information System (INIS)

    Lubuma, M.S.

    1991-05-01

    The applicability of the Neumann indirect method of potentials to the Dirichlet and Neumann problems for the two-dimensional Stokes operator on a non-smooth boundary Γ is subject to two kinds of sufficient and/or necessary conditions on Γ. The first one, occurring in electrostatics, is equivalent to the boundedness on C(Γ) of the velocity double layer potential W as well as to the existence of jump relations of potentials. The second condition, which forces Γ to be a simple rectifiable curve and which, compared to the Laplacian, is a stronger restriction on the corners of Γ, states that the Fredholm radius of W is greater than 2. Under these conditions, the Radon boundary integral equations defined by the above mentioned jump relations are solvable by the Fredholm theory; the double (for Dirichlet) and the single (for Neumann) layer potentials corresponding to their solutions are classical solutions of the Stokes problems. (author). 48 refs

  11. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
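
    The derivative-free idea can be sketched in the linear special case: integrating the ODE turns trajectories, not estimated derivatives, into regressors for a sparse fit. The linear basis and lasso below are simplifications of the paper's nonparametric additive formulation.

    ```python
    # x_i(t) - x_i(0) = sum_j a_ij * Int_0^t x_j ds  ->  one lasso per node i.
    import numpy as np
    from scipy.integrate import cumulative_trapezoid
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    p, T = 10, 200
    t = np.linspace(0, 10, T)

    A_true = np.zeros((p, p)); A_true[0, 1] = 0.8; A_true[2, 0] = -0.5
    np.fill_diagonal(A_true, -1.0)
    X = np.zeros((T, p)); X[0] = rng.standard_normal(p)
    for k in range(1, T):                        # crude Euler-simulated data
        X[k] = X[k-1] + (t[k] - t[k-1]) * X[k-1] @ A_true.T
    X += 0.01 * rng.standard_normal(X.shape)     # measurement noise

    Z = cumulative_trapezoid(X, t, axis=0, initial=0.0)   # integrated regressors
    A_hat = np.zeros_like(A_true)
    for i in range(p):
        A_hat[i] = Lasso(alpha=0.01, fit_intercept=False).fit(Z, X[:, i] - X[0, i]).coef_
    print(np.round(A_hat[0], 2))   # approximately recovers row 0's sparsity pattern
    ```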

  12. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

    Full Text Available This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, and this allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the general public license, as part of the R software project.

  13. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Weixun Zhou

    2017-05-01

    Full Text Available Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs for high-resolution remote sensing image retrieval (HRRSIR. To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor since there are no sufficiently-sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance.
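
    The first scheme, a pre-trained CNN used as a fixed feature extractor, can be sketched as follows; the backbone, layer choice, and cosine-similarity ranking are assumptions for illustration, not the specific networks evaluated in the paper.

    ```python
    # Retrieval by deep features: strip the classifier head, embed images,
    # rank the database by cosine similarity to the query.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads weights
    backbone.fc = torch.nn.Identity()            # drop classifier, keep features
    backbone.eval()

    @torch.no_grad()
    def embed(batch):                            # batch: (N, 3, 224, 224), normalized
        return F.normalize(backbone(batch), dim=1)

    query = torch.randn(1, 3, 224, 224)          # stand-ins for preprocessed images
    database = torch.randn(100, 3, 224, 224)
    scores = embed(query) @ embed(database).T    # cosine similarity
    print(scores.topk(5).indices)                # indices of the 5 best matches
    ```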

  14. A three-dimensional Dirichlet-to-Neumann operator for water waves over topography

    Science.gov (United States)

    Andrade, D.; Nachbin, A.

    2018-06-01

    Surface water waves are considered propagating over highly variable non-smooth topographies. For this three dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed reducing the numerical modeling and evolution to the two dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care and a Galerkin method is provided accordingly. One dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large amplitude, rapidly varying topography. An alternative, conformal mapping based, method is used for benchmarking. A two dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three dimensional DtN operator.

  15. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed to describe superconductivity in the framework of a nonadiabatic Heisenberg model in order to interpret the outstanding symmetry properties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin-angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin-angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-Tc compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energetic excitations (of well-defined symmetry) which mediate pair formation. This fact is proposed to be responsible for the high transition temperatures of these compounds. (author)

  16. Exactly soluble problems in statistical mechanics

    International Nuclear Information System (INIS)

    Yang, C.N.

    1983-01-01

    In the last few years, a number of two-dimensional classical and one-dimensional quantum mechanical problems in statistical mechanics have been exactly solved. Although these problems range over models of diverse physical interest, their solutions were obtained using very similar mathematical methods. In these lectures, the main points of the methods are discussed. In this introductory lecture, an overall survey of all these problems is given without going into the detailed method of solution. Later lectures concentrate on one particular problem: the delta function interaction in one dimension, and go into the details of that problem.

  17. Three-dimensional CT imaging of soft-tissue anatomy

    International Nuclear Information System (INIS)

    Fishman, E.K.; Ney, D.R.; Magid, D.; Kuhlman, J.E.

    1988-01-01

    Three-dimensional display of computed tomographic data has been limited to skeletal structures. This was in part related to the reconstruction algorithm used, which relied on a binary classification scheme. A new algorithm, volumetric rendering with percentage classification, provides the ability to display three-dimensional images of muscle and soft tissue. A review was conducted of images in 35 cases in which muscle and/or soft tissue were part of the clinical problem. In all cases, individual muscle groups could be clearly identified and discriminated. Branching vessels in the range of 2.3 mm could be identified. Similarly, lymph nodes could be clearly defined. High-resolution three-dimensional images were found to be useful both in providing an increased understanding of complex muscle and soft tissue anatomy and in surgical planning

  18. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    Science.gov (United States)

    Caplan, R. M.

    2013-04-01

    We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time
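
    A one-dimensional analogue of what the package integrates can be written in a few lines, with plain second-order differences instead of NLSEmagic's compact high-order schemes, and no GPU/MEX parts:

    ```python
    # Focusing cubic NLS, i*psi_t = -0.5*psi_xx - |psi|^2 psi, advanced with
    # classical RK4 on a periodic grid; the sech soliton conserves its mass ~2.
    import numpy as np

    n, L, dt, steps = 512, 40.0, 1e-3, 2000
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = 1.0 / np.cosh(x) + 0j                  # bright soliton initial data

    def rhs(psi):
        lap = (np.roll(psi, 1) - 2*psi + np.roll(psi, -1)) / dx**2
        return 1j * (0.5 * lap + np.abs(psi)**2 * psi)

    for _ in range(steps):                       # classical fourth-order Runge-Kutta
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5*dt*k1)
        k3 = rhs(psi + 0.5*dt*k2)
        k4 = rhs(psi + dt*k3)
        psi = psi + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

    print(np.trapz(np.abs(psi)**2, x))           # mass stays ~2 for the soliton
    ```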

  19. Indirect boundary element method for three dimensional problems. Analytical solution for contribution to wave field by triangular element; Sanjigen kansetsu kyokai yosoho. Sankakukei yoso no kiyo no kaisekikai

    Energy Technology Data Exchange (ETDEWEB)

    Yokoi, T [Building Research Institute, Tokyo (Japan); Sanchez-Sesma, F [Universidad National Autonoma de Mexico, (Mexico). Institute de Ingenieria

    1997-05-27

    A formulation is introduced for discretizing a boundary integral equation into an indirect boundary element method for the solution of 3-dimensional topographic problems. For problems of topographic response to seismic motion in a 2-dimensional in-plane field, Yokoi and Takenaka proposed a boundary integral equation built on a reference solution that admits an analytical form (the solution for the half-space elastic body with a flat free surface). That is to say, they proposed a boundary integral equation capable of effectively suppressing the non-physical waves that emerge in the result of computation in the wake of the truncation of the discretized ground surface, making use of the wave field in a semi-infinite elastic body with a flat free surface. They applied the proposed boundary integral equation, discretized into the indirect boundary element method, to solve some examples, and succeeded in proving its validity. In this report, the equation is extended to deal with 3-dimensional topographic problems. A problem of a P-wave vertically incident on a flat free surface is solved by both the conventional boundary integral equation and the proposed boundary integral equation, and the solutions are compared with each other. It is found that the new method, unlike the conventional one, can remove non-physical waves from the analytical result. 4 figs.

  20. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  1. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

    Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during tasks was filled by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Relativity and the dimensionality of the world

    CERN Document Server

    2007-01-01

    All physicists would agree that one of the most fundamental problems of 21st century physics is the dimensionality of the world. In the four-dimensional world of Minkowski (or Minkowski spacetime) the most challenging problem is the nature of the temporal dimension. In Minkowski spacetime it is merely one of the four dimensions, which means that it is entirely given like the other three spatial dimensions. If the temporal dimension were not given in its entirety and only one constantly changing moment of it existed, Minkowski spacetime would be reduced to the ordinary three-dimensional space. But if the physical world, represented by Minkowski spacetime, is indeed four-dimensional with time being the fourth dimension, then such a world is drastically different from its image based on our perceptions. The Minkowski four-dimensional world is a block Universe, a frozen world in which nothing happens since all moments of time are given 'at once', which means that physical bodies are four-dimensional worldtubes ...

  3. MARG2D code. 1. Eigenvalue problem for two dimensional Newcomb equation

    Energy Technology Data Exchange (ETDEWEB)

    Tokuda, Shinji [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Watanabe, Tomoko

    1997-10-01

    A new method and a code MARG2D have been developed to solve the 2-dimensional Newcomb equation, which plays an important role in the magnetohydrodynamic (MHD) stability analysis of an axisymmetric toroidal plasma such as a tokamak. In the present formulation, an eigenvalue problem is posed for the 2-D Newcomb equation, where the weight function (the kinetic energy integral) and the boundary conditions at rational surfaces are chosen so that an eigenfunction correctly behaves as the linear combination of the small solution and the analytical solutions around each of the rational surfaces. Thus, the difficulty of solving the 2-D Newcomb equation has been resolved. By using the MARG2D code, the ideal MHD marginally stable state can be identified for a 2-D toroidal plasma. The code is indispensable for computing the outer-region matching data necessary for resistive MHD stability analysis. A benchmark with ERATOJ, an ideal MHD stability code, has been carried out, and the MARG2D code demonstrates that it indeed identifies both stable and marginally stable states against ideal MHD motion. (author)

  4. SOME PROBLEMS ON JUMP CONDITIONS OF SHOCK WAVES IN 3-DIMENSIONAL SOLIDS

    Institute of Scientific and Technical Information of China (English)

    LI Yong-chi; YAO Lei; HU Xiu-zhang; CAO Jie-dong; DONG Jie

    2006-01-01

    Based on the general conservation laws in continuum mechanics, the Eulerian and Lagrangian descriptions of the jump conditions of shock waves in 3-dimensional solids were presented respectively. The implications of the jump conditions and the relations between them, particularly the relation between the mass conservation and the displacement continuity, were discussed. Meanwhile, the shock wave response curves in 3-dimensional solids, i.e. the Hugoniot curves, were analysed, which provide the foundation for studying the coupling effects of shock waves in 3-dimensional solids.

  5. Conceptual problem solving in high school physics

    Science.gov (United States)

    Docktor, Jennifer L.; Strand, Natalie E.; Mestre, José P.; Ross, Brian H.

    2015-12-01

    Problem solving is a critical element of learning physics. However, traditional instruction often emphasizes the quantitative aspects of problem solving such as equations and mathematical procedures rather than qualitative analysis for selecting appropriate concepts and principles. This study describes the development and evaluation of an instructional approach called Conceptual Problem Solving (CPS) which guides students to identify principles, justify their use, and plan their solution in writing before solving a problem. The CPS approach was implemented by high school physics teachers at three schools for major theorems and conservation laws in mechanics and CPS-taught classes were compared to control classes taught using traditional problem solving methods. Information about the teachers' implementation of the approach was gathered from classroom observations and interviews, and the effectiveness of the approach was evaluated from a series of written assessments. Results indicated that teachers found CPS easy to integrate into their curricula, students engaged in classroom discussions and produced problem solutions of a higher quality than before, and students scored higher on conceptual and problem solving measures.

  6. Conceptual problem solving in high school physics

    Directory of Open Access Journals (Sweden)

    Jennifer L. Docktor

    2015-09-01

    Full Text Available Problem solving is a critical element of learning physics. However, traditional instruction often emphasizes the quantitative aspects of problem solving such as equations and mathematical procedures rather than qualitative analysis for selecting appropriate concepts and principles. This study describes the development and evaluation of an instructional approach called Conceptual Problem Solving (CPS which guides students to identify principles, justify their use, and plan their solution in writing before solving a problem. The CPS approach was implemented by high school physics teachers at three schools for major theorems and conservation laws in mechanics and CPS-taught classes were compared to control classes taught using traditional problem solving methods. Information about the teachers’ implementation of the approach was gathered from classroom observations and interviews, and the effectiveness of the approach was evaluated from a series of written assessments. Results indicated that teachers found CPS easy to integrate into their curricula, students engaged in classroom discussions and produced problem solutions of a higher quality than before, and students scored higher on conceptual and problem solving measures.

  7. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  8. Improved non-dimensional dynamic influence function method for vibration analysis of arbitrarily shaped plates with clamped edges

    Directory of Open Access Journals (Sweden)

    Sang-Wook Kang

    2016-03-01

    Full Text Available A new formulation for the non-dimensional dynamic influence function method, which was developed by the authors, is proposed to efficiently extract eigenvalues and mode shapes of clamped plates with arbitrary shapes. Compared with the finite element and boundary element methods, the non-dimensional dynamic influence function method yields highly accurate solutions in eigenvalue analysis problems of plates and membranes including acoustic cavities. However, the non-dimensional dynamic influence function method requires the uneconomic procedure of calculating the singularity of a system matrix in the frequency range of interest for extracting eigenvalues because it produces a non-algebraic eigenvalue problem. This article describes a new approach that reduces the problem of free vibrations of clamped plates to an algebraic eigenvalue problem, the solution of which is straightforward. The validity and efficiency of the proposed method are illustrated through several numerical examples.

  9. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    Science.gov (United States)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete
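
    The POD step mentioned above can be sketched with synthetic snapshots; in the application, the snapshots would come from forward solves at samples of the parameter-reduced posterior rather than from the random low-rank data used here.

    ```python
    # Proper orthogonal decomposition via the SVD of a snapshot matrix.
    import numpy as np

    rng = np.random.default_rng(4)
    n_state, n_snap = 5000, 60
    modes = rng.standard_normal((n_state, 4))          # hidden low-rank structure
    snapshots = (modes @ rng.standard_normal((4, n_snap))
                 + 1e-3 * rng.standard_normal((n_state, n_snap)))

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1        # modes for 99.9% energy
    basis = U[:, :r]                                   # reduced state space

    # A full state u is replaced by r coefficients: u ~ basis @ (basis.T @ u)
    u = snapshots[:, 0]
    print(r, np.linalg.norm(u - basis @ (basis.T @ u)) / np.linalg.norm(u))
    ```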

  10. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue or nausea reported with first-generation systems are not different from two-dimensional (2D) LVSs. The higher cost of the system, the obligation to wear glasses, and the big and heavy camera probes of some devices are negative aspects that need to be improved. Depth loss in tissues with 2D LVSs and the associated adverse events can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side-effects reported by surgeons with first-generation systems, 3D LVSs seem to be a strong competitor to classical laparoscopic imaging systems. Thanks to technological advancements, using lighter and smaller cameras and monitors without glasses is in the near future.

  11. The simulation of a two-dimensional (2D) transport problem in a rectangular region with Lattice Boltzmann method with two-relaxation-time

    Science.gov (United States)

    Sugiyanto, S.; Hardyanto, W.; Marwoto, P.

    2018-03-01

    Transport phenomena are found in many problems in many engineering and industrial sectors. We analyzed a Lattice Boltzmann method with Two-Relaxation-Time (LTRT) collision operators for simulation of a pollutant moving through a medium, as a two-dimensional (2D) transport problem in a rectangular region model. This model consists of a 2D rectangular region of length 54 (x) and width 27 (y) with an isotropic homogeneous medium. Initially, the concentration is zero and is distributed evenly throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly with a uniform porosity of 0.35. A computer program is written in MATLAB to compute the concentration of the pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa = 1.0 is in good agreement with the results of other numerical solution methods, such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
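
    A minimal sketch of the lattice Boltzmann setup is given below, using a single relaxation time for brevity (the paper's LTRT scheme relaxes the symmetric and antisymmetric parts of the distributions with two separate rates); geometry, boundary handling, and parameters are simplified stand-ins for the benchmark.

    ```python
    # Toy D2Q5 lattice Boltzmann advection-diffusion with BGK collision.
    import numpy as np

    nx, ny, tau = 54, 27, 1.0
    c = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])  # D2Q5 velocities
    w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                   # lattice weights
    u = np.array([0.05, 0.0])                                 # uniform Darcy-like flow
    cs2 = 1.0 / 3.0                                           # lattice sound speed^2
    # Implied diffusion coefficient: D = cs2 * (tau - 0.5)

    C = np.zeros((nx, ny))                                    # concentration field
    f = w[:, None, None] * C[None]                            # distributions f_i

    def feq(C):
        return np.stack([w[i] * C * (1 + (c[i] @ u) / cs2) for i in range(5)])

    for step in range(500):
        f += -(f - feq(C)) / tau                              # BGK collision
        for i in range(5):                                    # streaming
            f[i] = np.roll(f[i], shift=(c[i][0], c[i][1]), axis=(0, 1))
        C = f.sum(axis=0)
        C[0, 9:18] = 1.0                                      # inlet strip boundary
        f[:, 0, :] = feq(C)[:, 0, :]                          # re-equilibrate inlet column

    print(C[nx // 2].round(2))                                # mid-domain profile
    ```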

  12. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day. Such a trend poses various challenges as these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is considered in dividing the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
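
    The scheme can be sketched with scikit-learn; the random feature partition below stands in for the paper's redundancy-based partitioning.

    ```python
    # Feature-space partitioning ensemble: one SVM per feature block,
    # combined by majority voting.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=500, n_informative=40,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    subsets = np.array_split(rng.permutation(X.shape[1]), 10)  # 10 feature blocks

    clfs = [SVC().fit(Xtr[:, s], ytr) for s in subsets]
    votes = np.stack([clf.predict(Xte[:, s]) for clf, s in zip(clfs, subsets)])
    majority = (votes.mean(axis=0) > 0.5).astype(int)          # majority voting
    print("ensemble accuracy:", (majority == yte).mean())
    ```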

  13. High dimensional biological data retrieval optimization with NoSQL technology

    Science.gov (United States)

    2014-01-01

    Background: High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results: In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance on MongoDB. Conclusions: The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data

  14. High dimensional biological data retrieval optimization with NoSQL technology.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating
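
    The row-key design idea can be illustrated without an HBase cluster: a composite key of patient and gene turns a patient's whole expression profile into a contiguous key-range scan instead of a relational join. The identifiers and values below are hypothetical.

    ```python
    # Plain-dict emulation of a key-value table with a composite row key.
    from collections import OrderedDict

    store = OrderedDict()                         # stand-in for an HBase table

    def put(patient, gene, value):
        store[f"{patient}#{gene}"] = value        # composite row key

    def scan_patient(patient):
        prefix = f"{patient}#"
        return {k.split("#")[1]: v for k, v in store.items() if k.startswith(prefix)}

    put("MM-0017", "TP53", 8.91)
    put("MM-0017", "MYC", 11.02)
    put("MM-0042", "TP53", 7.45)
    print(scan_patient("MM-0017"))                # {'TP53': 8.91, 'MYC': 11.02}
    ```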

  15. Non-linear instability analysis of the two-dimensional Navier-Stokes equation: The Taylor-Green vortex problem

    Science.gov (United States)

    Sengupta, Tapan K.; Sharma, Nidhi; Sengupta, Aditi

    2018-05-01

    An enstrophy-based non-linear instability analysis of the Navier-Stokes equation for two-dimensional (2D) flows is presented here, using the Taylor-Green vortex (TGV) problem as an example. This problem admits a time-dependent analytical solution as the base flow, whose instability is traced here. The numerical study of the evolution of the Taylor-Green vortices shows that the flow becomes turbulent, but an explanation for this transition has not been advanced so far. The deviation of the numerical solution from the analytical solution is studied here using a high accuracy compact scheme on a non-uniform grid (NUC6), with the fourth-order Runge-Kutta method. The stream function-vorticity (ψ, ω) formulation of the governing equations is solved here in a periodic square domain with four vortices at t = 0. Simulations performed at different Reynolds numbers reveal that numerical errors in computations induce a breakdown of symmetry and simultaneous fragmentation of vortices. It is shown that the actual physical instability is triggered by the growth of disturbances and is explained by the evolution of disturbance mechanical energy and enstrophy. The disturbance evolution equations have been traced by looking at (a) disturbance mechanical energy of the Navier-Stokes equation, as described in the work of Sengupta et al., "Vortex-induced instability of an incompressible wall-bounded shear layer," J. Fluid Mech. 493, 277-286 (2003), and (b) the creation of rotationality via the enstrophy transport equation in the work of Sengupta et al., "Diffusion in inhomogeneous flows: Unique equilibrium state in an internal flow," Comput. Fluids 88, 440-451 (2013).
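
    The analytical base flow whose instability is traced can be checked directly: the 2D Taylor-Green solution is divergence-free by construction, and its total enstrophy decays as exp(-4νt). The grid size and viscosity below are arbitrary choices for the check.

    ```python
    # Analytical 2D Taylor-Green vortex on a periodic square and its enstrophy.
    import numpy as np

    n, nu, t = 128, 0.01, 2.0
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    F = np.exp(-2 * nu * t)
    u = np.sin(X) * np.cos(Y) * F                # analytical TGV velocity
    v = -np.cos(X) * np.sin(Y) * F
    omega = 2 * np.sin(X) * np.sin(Y) * F        # vorticity = dv/dx - du/dy

    enstrophy = 0.5 * np.mean(omega**2) * (2*np.pi)**2
    ref = 0.5 * np.mean((2*np.sin(X)*np.sin(Y))**2) * (2*np.pi)**2 * np.exp(-4*nu*t)
    print(enstrophy, ref)                        # identical: decay ~ exp(-4*nu*t)
    ```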

  16. The bane of low-dimensionality clustering

    DEFF Research Database (Denmark)

    Cohen-Addad, Vincent; de Mesmay, Arnaud; Rotenberg, Eva

    2018-01-01

    geometric problems such as the traveling salesman problem, or computing an independent set of unit spheres. While these problems benefit from the so-called (limited) blessing of dimensionality, as they can be solved in time n^{O(k^{1-1/d})} or 2

  17. A highly simplified 3D BWR benchmark problem

    International Nuclear Information System (INIS)

    Douglass, Steven; Rahnema, Farzad

    2010-01-01

    The resurgent interest in reactor development associated with the nuclear renaissance has paralleled significant advancements in computer technology, and allowed for unprecedented computational power to be applied to the numerical solution of neutron transport problems. The current generation of core-level solvers relies on a variety of approximate methods (e.g. nodal diffusion theory, spatial homogenization) to efficiently solve reactor problems with limited computer power; however, in recent years, the increased availability of high-performance computer systems has created an interest in the development of new methods and codes (deterministic and Monte Carlo) to directly solve whole-core reactor problems with full heterogeneity (lattice and core level). This paper presents the development of a highly simplified heterogeneous 3D benchmark problem with physics characteristic of boiling water reactors. The aim of this work is to provide a problem for developers to use to validate new whole-core methods and codes which take advantage of the advanced computational capabilities that are now available. Additionally, eigenvalues and an overview of the pin fission density distribution are provided for the benefit of the reader. (author)

  18. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  19. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows EPS-LASSO outperforms existing methods, with stable type I error and FDR control. EPS-LASSO provides consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved. For Permissions, please
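
    The EPS design itself is simple to illustrate. The sketch below simulates a continuous trait, keeps only the phenotype tails, and fits an off-the-shelf LASSO; this shows the generic sampling design, not the EPS-LASSO decorrelated-score test itself.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy extreme phenotype sampling (EPS) followed by a LASSO fit.
n, p = 2000, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 0.5                                # 5 causal predictors
y = X @ beta + rng.standard_normal(n)

# Keep only the upper and lower 10% phenotype tails (the EPS design).
lo, hi = np.quantile(y, [0.10, 0.90])
keep = (y <= lo) | (y >= hi)
X_eps, y_eps = X[keep], y[keep]

model = Lasso(alpha=0.05).fit(X_eps, y_eps)
print("selected predictors:", np.flatnonzero(model.coef_ != 0)[:10])
```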

  20. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery, and validated the precision of the extraction based on Barista software. It was shown that the extraction of three-dimensional building information from high-resolution satellite imagery based on Barista software has the advantages of modest expertise requirements, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and the sensor orientation model were of sufficiently high precision and the off-nadir view angle was favourable.

  1. The Radiation Problem from a Vertical Hertzian Dipole Antenna above Flat and Lossy Ground: Novel Formulation in the Spectral Domain with Closed-Form Analytical Solution in the High Frequency Regime

    Directory of Open Access Journals (Sweden)

    K. Ioannidi

    2014-01-01

    Full Text Available We consider the problem of radiation from a vertical short (Hertzian) dipole above flat lossy ground, which represents the well-known “Sommerfeld radiation problem” in the literature. The problem is formulated in a novel spectral domain approach, and by inverse three-dimensional Fourier transformation the expressions for the received electric and magnetic (EM) fields in the physical space are derived as one-dimensional integrals over the radial component of the wavevector, in cylindrical coordinates. This formulation appears to have inherent advantages over the classical formulation by Sommerfeld, performed in the spatial domain, since it avoids the use of the so-called Hertz potential and its subsequent differentiation for the calculation of the received EM field. Subsequent use of the stationary phase method in the high frequency regime yields closed-form analytical solutions for the received EM field vectors, which coincide with the corresponding reflected EM field originating from the image point. In this way, we conclude that the so-called “space wave” in the literature represents the total solution of the Sommerfeld problem in the high frequency regime, in which case the surface wave can be ignored. Finally, numerical results are presented, in comparison with corresponding numerical results based on Norton’s solution of the problem.

  2. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  3. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas; Arsenault, Éric; Heiniger, Leo-Philipp; Soheilnia, Navid; Brillet, Jérémie; Moehl, Thomas; Zakeeruddin, Shaik; Ozin, Geoffrey A.; Grätzel, Michael

    2011-01-01

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  4. High-resolution three-dimensional mapping of semiconductor dopant potentials

    DEFF Research Database (Denmark)

    Twitchett, AC; Yates, TJV; Newcomb, SB

    2007-01-01

    Semiconductor device structures are becoming increasingly three-dimensional at the nanometer scale. A key issue that must be addressed to enable future device development is the three-dimensional mapping of dopant distributions, ideally under "working conditions". Here we demonstrate how a combination of electron holography and electron tomography can be used to determine quantitatively the three-dimensional electrostatic potential in an electrically biased semiconductor device with nanometer spatial resolution.

  5. The boundary element method for the solution of the multidimensional inverse heat conduction problem

    International Nuclear Information System (INIS)

    Lagier, Guy-Laurent

    1999-01-01

    This work focuses on the solution of the inverse heat conduction problem (IHCP), which consists in determining boundary conditions from a given set of internal temperature measurements. This problem is difficult to solve due to its ill-posedness and high sensitivity to measurement error. As a consequence, numerical regularization procedures are required to solve it. However, most of these methods depend on the dimension and the nature, stationary or transient, of the problem. Furthermore, these methods introduce parameters, called hyper-parameters, which have to be chosen optimally but cannot be determined a priori. Thus, a new general method is proposed for solving the IHCP. This method is based on a Boundary Element Method formulation and the use of the Singular Value Decomposition as a regularization procedure. Thanks to this method, it is possible to identify and eliminate the directions of the solution in which the measurement error plays the major role. The algorithm is first validated on two-dimensional stationary and one-dimensional transient problems. Criteria are presented for choosing the hyper-parameters. Then, the methodology is applied to two-dimensional and three-dimensional, theoretical or experimental, problems. The results are compared with those obtained by a standard method and show the accuracy of the method, its generality, and the validity of the proposed criteria. (author) [fr]
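
    The SVD-based regularization step can be illustrated independently of the boundary-element discretization; a minimal truncated-SVD sketch on a synthetic ill-conditioned system (all numbers illustrative):

```python
import numpy as np

# Truncated-SVD regularization of an ill-posed linear system A x = b,
# the same idea the thesis applies to a BEM discretization of the IHCP.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # ill-conditioned
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-4 * rng.standard_normal(40)            # noisy data

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_solve(k):
    # Keep only the k largest singular directions; directions dominated
    # by measurement noise are discarded.
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

for k in (4, 8, 12):
    err = np.linalg.norm(tsvd_solve(k) - x_true)
    print(f"k={k:2d}  error={err:.3e}")
```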

  6. Individual-based models for adaptive diversification in high-dimensional phenotype spaces.

    Science.gov (United States)

    Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael

    2016-02-07

    Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before it can be called consistent; such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed

  8. Solution to Two-Dimensional Steady Inverse Heat Transfer Problems with Interior Heat Source Based on the Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Shoubin Wang

    2017-01-01

    Full Text Available The compound-variable inverse problem, which comprises the boundary temperature distribution and the surface convective heat transfer coefficient of a two-dimensional steady heat transfer system with an inner heat source, is studied in this paper using the conjugate gradient method. Introducing a complex variable to evaluate the gradient of the objective function yields more precise inversion results. The boundary element method is applied to compute the temperatures at discrete points in the forward problem. The effects of measurement error and of the number of measurement points on the inversion results are discussed and compared with the L-MM method. Example calculations and analysis show that the method retains good effectiveness and accuracy even when measurement error is present and the number of boundary measurement points is reduced. The comparison indicates that the influence of error on the inversion solution can be minimized effectively using this method.
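
    One plausible reading of the "introduction of complex variable" for the gradient is the complex-step derivative trick; the sketch below assumes that reading, with a made-up two-parameter objective standing in for the forward solve.

```python
import numpy as np

# Complex-step differentiation: J'(x) ~ Im(J(x + i*h)) / h for tiny h,
# giving gradients free of subtractive cancellation. The two-parameter
# objective below is made up; it stands in for the least-squares
# mismatch produced by the forward boundary-element solve.
def objective(q):
    pred = np.array([q[0] + 0.5 * q[1], 0.2 * q[0] * q[1]])  # toy model
    meas = np.array([1.0, 0.3])                              # "data"
    return np.sum((pred - meas) ** 2)

def complex_step_grad(f, x, h=1e-20):
    g = np.zeros(len(x))
    for k in range(len(x)):
        xc = x.astype(complex)
        xc[k] += 1j * h
        g[k] = f(xc).imag / h   # accurate to machine precision for tiny h
    return g

x = np.array([0.5, 0.5])
print(complex_step_grad(objective, x))   # gradient for a CG iteration
```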

  9. A Multi-layer Hybrid Framework for Dimensional Emotion Classification

    NARCIS (Netherlands)

    Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja

    2011-01-01

    This paper investigates dimensional emotion prediction and classification from naturalistic facial expressions. Similarly to many pattern recognition problems, dimensional emotion classification requires generating multi-dimensional outputs. To date, classification for valence and arousal dimensions

  10. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  11. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  12. The Topology Optimization of Three-dimensional Cooling Fins by the Internal Element Connectivity Parameterization Method

    International Nuclear Information System (INIS)

    Yoo, Sung Min; Kim, Yoon Young

    2007-01-01

    This work is concerned with the topology optimization of three-dimensional cooling fins or heat sinks. Motivated by the earlier success of the Internal Element Connectivity Parameterization (I-ECP) method in two-dimensional problems, the extension of I-ECP to three-dimensional problems is carried out. The main effort was to maintain the numerically trouble-free characteristics of I-ECP in full three-dimensional problems; a serious numerical problem appearing in thermal topology optimization is erroneous temperature undershooting. The effectiveness of the present implementation was checked through the design optimization of three-dimensional fins

  13. Direct and inverse problems of studying the properties of multilayer nanostructures based on a two-dimensional model of X-ray reflection and scattering

    Science.gov (United States)

    Khachaturov, R. V.

    2014-06-01

    A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.

  14. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, an exhaustive search over all combinations of features is then a prerequisite for finding the optimal feature subsets for classifying such data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
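
    Both the criterion I(X;Y) = H(Y) and the failure of pairwise approximations can be seen on a toy XOR example; a short sketch assuming discrete features and plug-in (counting) estimates:

```python
import numpy as np
from collections import Counter

# Empirical check of the criterion I(X;Y) = H(Y) for discrete data:
# when it holds, the feature subset X determines the class Y
# (a Markov blanket, in the paper's terms).
def entropy(labels):
    n = len(labels)
    return -sum(c / n * np.log2(c / n) for c in Counter(labels).values())

def mutual_information(features, y):
    # I(X;Y) = H(Y) - H(Y|X), with X a tuple-valued feature subset.
    n = len(y)
    h_y_given_x = 0.0
    for x_val, count in Counter(features).items():
        idx = [i for i, f in enumerate(features) if f == x_val]
        h_y_given_x += count / n * entropy([y[i] for i in idx])
    return entropy(y) - h_y_given_x

x1 = [0, 0, 1, 1, 0, 1, 0, 1]
x2 = [0, 1, 0, 1, 0, 1, 1, 0]
y  = [0, 1, 1, 0, 0, 0, 1, 1]   # y = XOR(x1, x2)

X = list(zip(x1, x2))
print("H(Y) =", entropy(y), " I(X;Y) =", mutual_information(X, y))
# The pairwise MI of x1 alone (or x2 alone) with y is 0 here, which
# illustrates why algebraic combinations of pairwise terms can fail.
```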

  15. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

    Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur has remained elusive. We report that molybdenum disulfide layers are able to serve as a nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined in between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), agree with the computational insights. Due to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s⁻¹ without the addition of conductive additives. Using such hybrid structures as the electrode, two-electrode supercapacitor cells yield a power density of 106 W kg⁻¹ and an energy density of 47.5 Wh kg⁻¹ in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  16. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

    Full Text Available High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g., multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g., plots of aggregated signals).

  17. Three-dimensional problems of the hydrodynamic interaction between bodies in a viscous fluid in the vicinity of their contact

    Czech Academy of Sciences Publication Activity Database

    Petrov, A. G.; Kharlamov, Alexander A.

    2013-01-01

    Vol. 48, No. 5 (2013), p. 577-587 ISSN 0015-4628 R&D Projects: GA ČR(CZ) GA103/09/2066 Grant - others: Development of the Scientific Potential of the Higher School (RU) 2.1.2/3604; Russian Foundation for Basic Research (RU) 11-01-005355 Institutional support: RVO:67985874 Keywords: lubrication layer theory * viscous and inviscid fluids * thin layer * vicinity of a contact * three-dimensional problems Subject RIV: BK - Fluid Dynamics Impact factor: 0.320, year: 2013

  18. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Computationally expensive engineering simulations can often hold back the design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate design. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
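
    The linear variant of this construction is compact enough to sketch: compress simulation snapshots with principal component analysis, then interpolate the reduced coordinates over the design space with radial basis functions. The test function and sample sizes below are made up; a kernel variant would substitute KernelPCA for PCA.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

# Sketch of linear ROSM: PCA compresses high-dimensional simulation
# outputs; RBFs interpolate the reduced coordinates over the design
# space. The "solver" below is a synthetic stand-in.
rng = np.random.default_rng(2)
params = rng.uniform(0, 1, size=(60, 2))            # design-space samples
grid = np.linspace(0, 1, 500)                       # "field" of 500 outputs

def simulate(p):                                    # stand-in for the solver
    return np.sin(6 * p[0] * grid) * np.exp(-p[1] * grid)

snapshots = np.array([simulate(p) for p in params]) # (60, 500)

pca = PCA(n_components=5).fit(snapshots)
coeffs = pca.transform(snapshots)                   # (60, 5) reduced coords
surrogate = RBFInterpolator(params, coeffs)         # params -> coefficients

p_new = np.array([[0.3, 0.7]])
prediction = pca.inverse_transform(surrogate(p_new))
error = np.linalg.norm(prediction[0] - simulate(p_new[0]))
print(f"reconstruction error: {error:.3e}")
```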

  19. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method for clustering data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering non-spatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified on synthetic datasets.

  20. On the partition function of d+1 dimensional kink-bearing systems

    International Nuclear Information System (INIS)

    Radosz, A.; Salejda, W.

    1987-01-01

    It is suggested that the problem of finding the partition function of a (d+1)-dimensional kink-bearing system in the classical approximation may be formulated as an eigenvalue problem of an appropriate d-dimensional quantum

  1. On the use, by Einstein, of the principle of dimensional homogeneity, in three problems of the physics of solids

    Directory of Open Access Journals (Sweden)

    FERNANDO L. LOBO B. CARNEIRO

    2000-12-01

    Full Text Available Einstein, in 1911, published an article on the application of the principle of dimensional homogeneity to three problems of the physics of solids: the characteristic frequency of the atomic lattices of crystalline solids as a function of their moduli of compressibility or of their melting points, and the thermal conductivity of crystalline insulators. Recognizing that the physical dimensions of temperature are not the same as those of energy and heat, Einstein had recourse to the artifice of replacing that physical parameter by its product with the Boltzmann constant, so obtaining correct results. But nowadays, with the new basic quantities "thermodynamic temperature theta (unit: kelvin)", "electric current I (unit: ampere)" and "amount of substance (unit: mole)" incorporated into the SI International System of Units in 1960 and 1971, the same results are obtained in a more direct and coherent way. At the time of Einstein's article only three basic physical quantities were considered: length L, mass M, and time T. He did not use the pi theorem of dimensional analysis diffused by Buckingham three years later, and obtained the "pi numbers" by trial and error. The present paper revisits Einstein's article using the modern methodology of dimensional analysis and the theory of physical similitude.

  2. Some problems of high-energy elementary particle physics

    International Nuclear Information System (INIS)

    Isaev, P.S.

    1995-01-01

    The problems of high-energy elementary particle physics are discussed. It is pointed out that the modern theory of elementary particle physics offers no solution to several major physical problems: the origin of mass, electric charge, the identity of particle masses, the change of elementary particle masses in time, and others. 7 refs.

  3. Reduced-Contrast Approximations for High-Contrast Multiscale Flow Problems

    KAUST Repository

    Chung, Eric T.; Efendiev, Yalchin

    2010-01-01

    In this paper, we study multiscale methods for high-contrast elliptic problems where the media properties change dramatically. The disparity in the media properties (also referred to as high contrast in the paper) introduces an additional scale that needs to be resolved in multiscale simulations. First, we present a construction that uses an integral equation to represent the high-contrast component of the solution. This representation involves solving an integral equation along the interface where the coefficients are discontinuous. The integral representation suggests some multiscale approaches that are discussed in the paper. One of these approaches entails the use of interface functions in addition to multiscale basis functions representing the heterogeneities without high contrast. In this paper, we propose an approximation for the solution of the integral equation using the interface problems in reduced-contrast media. Reduced-contrast media are obtained by lowering the variance of the coefficients. We also propose a similar approach for the solution of the elliptic equation without using an integral representation. This approach is simpler to use in computations because it does not involve setting up integral equations. The main idea of this approach is to approximate the solution of the high-contrast problem by the solutions of problems formulated in reduced-contrast media. In this approach, a rapidly converging sequence is proposed where only problems with lower contrast are solved. It is shown that this sequence possesses a convergence rate that is inversely proportional to the reduced contrast. This approximation allows choosing the reduced-contrast problem based on the coarse-mesh size as discussed in this paper. We present a simple application of this approach to homogenization of elliptic equations with high-contrast coefficients. The presented approaches are limited to the cases where there are sharp changes in the contrast (i.e., the high

  4. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    Science.gov (United States)

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
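
    The stage-wise (component-wise) boosting idea mentioned above can be sketched in a plain least-squares setting; this is the same update rule, not the Cox-type likelihood boosting discussed in the article.

```python
import numpy as np

# Component-wise (stage-wise) least-squares boosting: each iteration
# updates only the single coefficient that best fits the current
# residuals, shrunk by a small step, which yields the sparse,
# variable-selecting behaviour described in the review.
rng = np.random.default_rng(3)
n, p = 100, 1000                     # p >> n
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)       # standardize columns
beta_true = np.zeros(p)
beta_true[[3, 50, 400]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = np.zeros(p)
step = 0.1
for _ in range(500):
    r = y - X @ beta                 # current residuals
    scores = X.T @ r                 # per-component least-squares fit
    j = np.argmax(np.abs(scores))    # best single component
    beta[j] += step * scores[j]      # shrunken update

print("nonzero coefficients:", np.flatnonzero(np.abs(beta) > 0.05))
```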

  5. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    Directory of Open Access Journals (Sweden)

    Changsheng Zhu

    2018-03-01

    Full Text Available In the simulation of dendritic growth, computational efficiency and problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking high-performance calculation methods to improve computational efficiency and expand problem scales is of great significance to the study of material microstructure. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy under the coupling of multiple physical processes. The acceleration effect of different numbers of GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model introduced, two optimization schemes, non-blocking communication and overlap of MPI communication with GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model improves the computational efficiency of the three-dimensional phase-field model considerably, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. The feasibility of both optimization schemes is demonstrated, and the overlap of MPI communication with GPU computing performs better, yielding a further 1.7-fold speedup over the basic multi-GPU model when 21 GPUs are used.
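
    The second optimization, overlapping MPI communication with computation, follows a standard pattern; a hedged mpi4py sketch, with a CPU stencil standing in for the GPU kernel (this illustrates the pattern, not the paper's code):

```python
import numpy as np
from mpi4py import MPI

# Overlap sketch: post non-blocking halo exchanges, update the interior
# rows (which need no remote data) while messages are in flight, then
# finish the halo-dependent rows. Ring decomposition; >= 3 ranks
# assumed to keep message matching simple.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up, down = (rank - 1) % size, (rank + 1) % size

field = np.random.rand(130, 128)            # 128 interior rows + 2 halos
recv_top, recv_bot = np.empty(128), np.empty(128)

reqs = [comm.Isend(field[1].copy(), dest=up),     # my top interior row
        comm.Isend(field[-2].copy(), dest=down),  # my bottom interior row
        comm.Irecv(recv_top, source=up),
        comm.Irecv(recv_bot, source=down)]

# Interior update overlaps with the communication (rows 2..127 only).
field[2:-2] = 0.25 * (field[1:-3] + field[3:-1] + 2.0 * field[2:-2])

MPI.Request.Waitall(reqs)                   # communication finished
field[0], field[-1] = recv_top, recv_bot    # fill halos; rows 1 and -2,
# which depend on the received data, would be updated here.
```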

  6. The searchlight problem for neutrons in a semi-infinite medium

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1993-01-01

    The solution of the Search Light Problem for monoenergetic neutrons in a semi-infinite medium with isotropic scattering illuminated at the free surface is obtained by several methods at various planes within the medium. The sources considered are a normally-incident pencil beam and an isotropic point source. The analytic solution is effected by a recently developed numerical inversion technique applied to the Fourier-Bessel transform. This transform inversion results from the solution method of Rybicki, where the two-dimensional problem is solved by casting it as a variant of a one-dimensional problem. The numerical inversion process results in a highly accurate solution. Comparisons of the analytic solution with results from Monte Carlo (MCNP) and discrete ordinates transport (DORT) codes show excellent agreement. These comparisons, which are free of any associated data or cross section set dependencies, provide significant evidence of the proper operation of both the transport codes tested

  7. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  8. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  9. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.

  10. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  11. On some three-dimensional problems of piezoelectricity | Saha ...

    African Journals Online (AJOL)

    The problem of an elliptical crack embedded in an unbounded transversely isotropic piezoelectric medium and subjected to remote normal loading is considered first. The integral equation method developed by Roy and his coworkers has been applied suitably with proper modifications to solve the problem. The method ...

  12. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignments represent in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  13. Hull properties in location problems

    DEFF Research Database (Denmark)

    Juel, Henrik; Love, Robert F.

    1983-01-01

    Some properties of the solution set for single and multifacility continuous location problems with lp distances are given. A set reduction algorithm is developed for problems in k-dimensional space having rectangular distances....

  14. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods: from the areas reported by the control software, and by fitting the data with a Gaussian model; these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision in regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatography control software gave superior limits of detection, of the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
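
    The Gaussian-fitting method for peak area is straightforward to reproduce in outline; a sketch with synthetic peak data, parameterizing the model directly by its area:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fitting a chromatographic peak with a Gaussian model, as in the
# paper's second peak-area method. The peak parameters and noise level
# below are synthetic.
def gaussian(t, area, center, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -(t - center) ** 2 / (2 * sigma ** 2))

t = np.linspace(0, 10, 400)
signal = gaussian(t, area=3.0, center=5.2, sigma=0.35)
signal += 0.01 * np.random.default_rng(5).standard_normal(t.size)

popt, _ = curve_fit(gaussian, t, signal, p0=[1.0, 5.0, 0.5])
print(f"fitted area = {popt[0]:.3f}")  # model is parameterized by area
```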

  15. The inverse problem for the one-dimensional Schroedinger equation with an energy-dependent potential. II

    International Nuclear Information System (INIS)

    Jaulent, M.; Jean, C.

    1976-01-01

    The one-dimensional Schroedinger equation y″ + (k² − V⁺(k,x))y = 0, x ∈ ℝ, was previously considered when the potential V⁺(k,x) depends on the energy k² in the following way: V⁺(k,x) = U(x) + 2kQ(x), with (U(x), Q(x)) belonging to a large class of pairs of real potentials admitting no bound state. The two systems of differential and integral equations then introduced are solved. Then, investigating the inverse scattering problem, it is found that a necessary and sufficient condition for one of the functions S⁺(k) and S₋₁⁺(k) to be the scattering matrix associated with a pair (U(x), Q(x)) is that S⁺(k) (or equivalently S₋₁⁺(k)) belongs to the class S introduced. This pair is the only one admitting this function as its scattering matrix. Investigating the inverse reflection problem, it is found that a necessary and sufficient condition for a function S₂₁⁺(k) to be the reflection coefficient to the right associated with a pair (U(x), Q(x)) is that S₂₁⁺(k) belongs to the class R introduced. This pair is the only one admitting this function as

  16. Three-dimensional tokamak equilibria and stellarators with two-dimensional magnetic symmetry

    International Nuclear Information System (INIS)

    Garabedian, P.R.

    1997-01-01

    Three-dimensional computer codes have been developed to simulate equilibrium, stability and transport in tokamaks and stellarators. Bifurcated solutions of the tokamak problem suggest that three-dimensional effects may be more important than has generally been thought. Extensive calculations have led to the discovery of a stellarator configuration with just two field periods and with aspect ratio 3.2 that has a magnetic field spectrum B_mn with toroidal symmetry. Numerical studies of equilibrium, stability and transport for this new device, called the Modular Helias-like Heliac 2 (MHH2), will be presented. (author)

  17. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  18. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets. Then, cheating attempts can be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes

  19. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    Science.gov (United States)

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and the need for a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction with high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
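
    As a point of comparison, a penalized linear marker combination evaluated by AUC can be obtained with off-the-shelf tools. The sketch below uses elastic-net logistic regression, a related baseline rather than the parametric AUC maximizer proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Elastic-net penalized linear combination of many markers, scored by
# AUC on held-out samples. Sample sizes and effect sizes are synthetic.
rng = np.random.default_rng(4)
n, p = 200, 2000
X = rng.standard_normal((n, p))
w = np.zeros(p)
w[:10] = 0.8                                      # 10 informative genes
y = (X @ w + rng.standard_normal(n) > 0).astype(int)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=0.1, max_iter=5000)
clf.fit(X[:150], y[:150])
scores = clf.decision_function(X[150:])
print("test AUC:", roc_auc_score(y[150:], scores))
```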

  20. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► 3D graphene/polyaniline has never been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized using in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant-current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained from CVs with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of 3D graphene contribute to the high specific capacitance and good cyclic life

  1. Three-dimensional transport theory: An analytical solution of an internal beam searchlight problem-I

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2009-01-01

    We describe a number of methods for obtaining analytical solutions and numerical results for three-dimensional one-speed neutron transport problems in a half-space containing a variety of source shapes which emit neutrons mono-directionally. For example, we consider an off-centre point source, a ring source and a disk source, or any combination of these, and calculate the surface scalar flux as a function of the radial and angular co-ordinates. Fourier transforms in the transverse directions are used and a Laplace transform in the axial direction. This enables the Wiener-Hopf method to be employed, followed by an inverse Fourier-Hankel transform. Some additional transformations are introduced which enable the inverse Hankel transforms involving Bessel functions to be evaluated numerically more efficiently. A hybrid diffusion theory method is also described which is shown to be a useful guide to the general behaviour of the solutions of the transport equation.

  2. An algorithm for determining the K-best solutions of the one-dimensional Knapsack problem

    Directory of Open Access Journals (Sweden)

    Horacio Hideki Yanasse

    2000-06-01

    Full Text Available In this work we present an enumerative scheme for determining the K-best solutions (K > 1) of the one-dimensional knapsack problem. If n is the total number of different items and b is the knapsack's capacity, the computational complexity of the proposed scheme is bounded by O(Knb), with memory requirements bounded by O(nb). The algorithm was implemented on a workstation and computational tests for varying values of the parameters were performed.
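
    The O(Knb) recursion can be sketched directly: each dynamic-programming cell keeps the K best objective values instead of a single maximum. The sketch below tracks distinct value totals only, not the item sets themselves.

```python
from heapq import nlargest

# K-best 0/1 knapsack by dynamic programming: each state keeps the K
# best totals instead of just the maximum, so the recursion merges two
# candidate lists per (item, capacity) cell -- O(Knb) work overall.
def k_best_knapsack(values, weights, capacity, K):
    # dp[c] holds up to K best totals achievable within capacity c
    # using the items seen so far, sorted in decreasing order.
    dp = [[0] for _ in range(capacity + 1)]
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse scan: 0/1 items
            candidates = dp[c] + [x + v for x in dp[c - w]]
            dp[c] = nlargest(K, set(candidates))
    return dp[capacity]

# Three best attainable values for a small instance.
print(k_best_knapsack([6, 5, 4, 3], [4, 3, 2, 1], capacity=6, K=3))
```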

  3. Mesoporous Three-Dimensional Graphene Networks for Highly Efficient Solar Desalination under 1 sun Illumination.

    Science.gov (United States)

    Kim, Kwanghyun; Yu, Sunyoung; An, Cheolwon; Kim, Sung-Wook; Jang, Ji-Hyun

    2018-05-09

    Solar desalination via thermal evaporation of seawater is one of the most promising technologies for addressing the serious problem of global water scarcity, because it requires no supporting energy other than abundant solar energy to generate clean water. However, low efficiency and a large amount of heat loss are considered critical limitations of solar desalination technology. The combination of mesoporous three-dimensional graphene networks (3DGNs), with their high solar absorption, and water-transporting wood pieces, with their thermal insulation, greatly enhances solar-to-vapor conversion efficiency. 3DGN deposited on a wood piece provides an outstanding solar-to-vapor conversion efficiency of about 91.8% under 1 sun illumination and excellent desalination performance, with a five-order-of-magnitude decrease in salinity. The mass-producible 3DGN, enriched with mesopores, efficiently releases vapor from an enormous surface area by localizing heat at the top surface of the wood piece. Because the efficient solar desalination device made from 3DGN on a wood piece is highly scalable and inexpensive, it could serve as one of the main sources for the worldwide supply of purified water, achieved via earth-abundant materials without an extra supporting energy source.

  4. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region—as obtained by a numerical hydrodynamics and particle transport model—we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation field of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separation. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with orbital orientation.

  5. Problems in the design and specification of containers for vitrified high-level liquid waste

    International Nuclear Information System (INIS)

    Corbet, A.D.W.; Hall, G.G.; Spiller, G.T.

    1976-01-01

    In the United Kingdom the growing problem of ensuring the safe storage of high-level liquid waste over long time scales has led to a policy for implementing solidification. A brief description is given of the HARVEST vitrification process, which is essentially a scaled-up version of the FINGAL process with increased throughput. The functional requirements of the container are considered. It must be made of a material which can be fabricated to a high standard. Diameters up to 600 mm for right circular cylindrical containers and 1200 mm for annular containers are contemplated. Computer aids for axisymmetric and three-dimensional heat transfer and stress analysis are identified. One example is given of the thermal profile for the cylindrical container in the furnace and another example for the annular container following an accident condition. Measured values are given for high temperature oxidation, emissivity and the short-term creep strength of various alloys. Corrosion in fresh water and sea water over long time periods and leaching of partially exposed solid waste are discussed, and a conceptual package for sea bed disposal is described. The relative merits of the different methods of manufacture are pointed out, and the paper concludes that HK-40 or, better, INCOLOY alloy 800L are suitable materials of construction. (author)

  6. Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA

    Science.gov (United States)

    Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.

    2018-04-01

    Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of ⁴⁸Ca in long-running Chimera simulations.

  7. On two-spectra inverse problems

    OpenAIRE

    Guliyev, Namig J.

    2018-01-01

    We consider a two-spectra inverse problem for the one-dimensional Schrödinger equation with boundary conditions containing rational Herglotz–Nevanlinna functions of the eigenvalue parameter, and provide a complete solution of this problem.

  8. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  9. Numerical determination of families of three-dimensional double-symmetric periodic orbits in the restricted three-body problem. Pt. 1

    International Nuclear Information System (INIS)

    Kazantzis, P.G.

    1979-01-01

    New families of three-dimensional double-symmetric periodic orbits are determined numerically in the Sun-Jupiter case of the restricted three-body problem. These families bifurcate from the 'vertical-critical' orbits (α_ν = -1, c_ν = 0) of the 'basic' plane families i, g1, g2, h, a, m and I. Further, the numerical procedure employed in the determination of these families has been described and interesting results have been pointed out. Also, computer plots of the orbits of these families have been shown in conical projections. (orig.)

  10. OPERATOR-RELATED FORMULATION OF THE EIGENVALUE PROBLEM FOR THE BOUNDARY PROBLEM OF ANALYSIS OF A THREE-DIMENSIONAL STRUCTURE WITH PIECEWISE-CONSTANT PHYSICAL AND GEOMETRICAL PARAMETERS ALONGSIDE THE BASIC DIRECTION WITHIN THE FRAMEWORK OF THE DISCRETE-CONTINUAL APPROACH

    Directory of Open Access Journals (Sweden)

    Akimov Pavel Alekseevich

    2012-10-01

    The proposed paper covers the operator-related formulation of the eigenvalue problem of analysis of a three-dimensional structure that has piecewise-constant physical and geometrical parameters along the so-called basic direction, within the framework of a discrete-continual approach (a discrete-continual finite element method, a discrete-continual variational method). Generally, discrete-continual formulations represent contemporary mathematical models that are becoming available for computer implementation. They make it possible for a researcher to consider boundary effects whenever particular components of the solution are rapidly varying functions. Another feature of discrete-continual methods is the absence of any limitations imposed on the lengths of structures. The three-dimensional problem of elasticity is used as the design model of the structure. In accordance with the so-called method of extended domain, the domain in question is embedded in an extended domain of arbitrary shape. At the stage of numerical implementation, the key features of discrete-continual methods include convenient mathematical formulas, effective computational patterns and algorithms, simple data processing, etc. The authors present their formulation of the problem in question for an isotropic medium, with allowance for supports restrained by elastic elements, while standard boundary conditions are also taken into consideration.

  11. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. The remaining major pitfall is their relatively poor photovoltaic performance in contrast to 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)(MA)PbI perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among the reported 2D devices, with excellent humidity resistance. The enhanced efficiency from 12.3% (without Cs) to 13.7% (with 5% Cs) is attributed to perfectly controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility and charge-transfer kinetics. Surprisingly, it is found that the Cs doping yields superior stability for the 2D perovskite solar cells when subjected to a high humidity environment without encapsulation. The device doped using 5% Cs degrades only ca. 10% after 1400 hours of exposure in 30% relative humidity (RH), and exhibits significantly improved stability under heating and high moisture environments. Our results provide an important step toward air-stable and fully printable low dimensional perovskites as a next-generation renewable energy source.

  12. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...

  13. Equilibrium: two-dimensional configurations

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    In Chapter 6, the problem of toroidal force balance is addressed in the simplest nontrivial two-dimensional geometry, that of an axisymmetric torus. A derivation is presented of the Grad-Shafranov equation, the basic equation describing axisymmetric toroidal equilibrium. The solutions to this equation provide a complete description of ideal MHD equilibria: radial pressure balance, toroidal force balance, equilibrium Beta limits, rotational transform, shear, magnetic well, etc. A wide range of configurations are accurately modeled by the Grad-Shafranov equation; among them are all types of tokamaks, the spheromak, the reversed field pinch, and toroidal multipoles. An important aspect of the analysis is the use of asymptotic expansions, with an inverse aspect ratio serving as the expansion parameter. In addition, an equation similar to the Grad-Shafranov equation, but for helically symmetric equilibria, is presented. This equation represents the leading-order description of low-Beta and high-Beta stellarators, heliacs, and the Elmo bumpy torus. The solutions all correspond to infinitely long straight helices. Bending such a configuration into a torus requires a full three-dimensional calculation and is discussed in Chapter 7.

  14. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high-dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
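
    The core LSH primitive underlying these structures is easy to sketch. The toy Python index below hashes points with random projections and answers a query from a single bucket; the LSB-tree additionally orders the hash values along a space-filling curve and stores them in a B-tree, which is omitted here:

      import numpy as np
      from collections import defaultdict

      class ToyE2LSH:
          def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
              rng = np.random.default_rng(seed)
              self.A = rng.normal(size=(n_hashes, dim))   # random projections
              self.B = rng.uniform(0, w, n_hashes)        # random offsets
              self.w = w
              self.buckets = defaultdict(list)

          def _key(self, x):
              return tuple(np.floor((self.A @ x + self.B) / self.w).astype(int))

          def insert(self, i, x):
              self.buckets[self._key(x)].append((i, x))

          def query(self, q):
              # nearest candidate within the query's own bucket, if any
              cand = self.buckets.get(self._key(q), [])
              return min(cand, key=lambda ix: np.linalg.norm(ix[1] - q),
                         default=None)

      data = np.random.default_rng(1).normal(size=(1000, 50))
      index = ToyE2LSH(dim=50)
      for i, x in enumerate(data):
          index.insert(i, x)
      print(index.query(data[0] + 0.01))   # usually finds point 0 in its bucket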

  15. [Advances in the research of application of collagen in three-dimensional bioprinting].

    Science.gov (United States)

    Li, H H; Luo, P F; Sheng, J J; Liu, G C; Zhu, S H

    2016-10-20

    As a new industrial technology with high precision and accuracy, three-dimensional bioprinting is increasingly widely applied in the field of medical research. Collagen is one of the most common components of tissue, and it has good properties as a biological material. There are many reports of using collagen as the main component of the 'ink' in three-dimensional bioprinting. However, the applied collagen is mainly from heterogeneous sources, which may cause some problems in application. Recombinant human collagen can be obtained from microbial fermentation by transgenic technology, but more research should be done to confirm its properties. This article reviews advances in the research of collagen and its biological application in three-dimensional bioprinting.

  16. Effects of Problem Based Economics on High School Economics Instruction

    Science.gov (United States)

    Finkelstein, Neal; Hanson, Thomas

    2011-01-01

    The primary purpose of this study is to assess student-level impacts of a problem-based instructional approach to high school economics. The curriculum approach examined here was designed to increase class participation and content knowledge for high school students who are learning economics. This study tests the effectiveness of Problem Based…

  17. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  18. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Supercapacitors are a new type of energy-storage device and have attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered to be a promising supercapacitor material because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and then three-dimensional (3D) graphene foam is prepared by using nickel foam as a template and FeCl3/HCl solution as an etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance and exhibits a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. It is expected that the 3D graphene foam will have potential applications in supercapacitors.

  19. Multi-dimensional Laplace transforms and applications

    International Nuclear Information System (INIS)

    Mughrabi, T.A.

    1988-01-01

    In this dissertation we establish new theorems for computing certain types of multidimensional Laplace transform pairs from known one-dimensional Laplace transforms. The theorems are applied to the most commonly used special functions, and so we obtain many two- and three-dimensional Laplace transform pairs. As applications, some boundary value problems involving linear partial differential equations are solved by the use of multi-dimensional Laplace transformation. We also establish some relations between the Laplace transformation and other integral transformations in two variables.
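
    In a computer algebra system, such iterated application of one-dimensional transforms can be carried out directly. A small SymPy example (a simple separable function, assumed purely for illustration) in the spirit of these theorems:

      import sympy as sp

      x, y, a, b = sp.symbols('x y a b', positive=True)
      s, p = sp.symbols('s p', positive=True)

      f = sp.exp(-a * x - b * y)
      F_x = sp.laplace_transform(f, x, s, noconds=True)     # transform in x
      F_xy = sp.laplace_transform(F_x, y, p, noconds=True)  # then in y
      print(sp.simplify(F_xy))   # 1/((a + s)*(b + p))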

  20. Three-dimensional Modeling of Type Ia Supernova Explosions

    Science.gov (United States)

    Khokhlov, Alexei

    2001-06-01

    A deflagration explosion of a Type Ia supernova (SNIa) is studied using three-dimensional, high-resolution, adaptive mesh refinement fluid dynamic calculations. The deflagration speed in an exploding Chandrasekhar-mass carbon-oxygen white dwarf (WD) grows exponentially, reaches approximately 30% of the speed of sound, and then declines due to the WD expansion. The outermost layers of the WD remain unburned. The explosion energy is comparable to that of a Type Ia supernova. The freezing of turbulent motions by expansion appears to be a crucial physical mechanism regulating the strength of a supernova explosion. In contrast to one-dimensional models, three-dimensional calculations predict the formation of Si-group elements and pockets of unburned CO in the middle and central regions of the supernova ejecta. This, and the presence of an unburned outer layer of carbon-oxygen, may pose problems for SNIa spectra. Explosion sensitivity to initial conditions and its relation to the diversity of SNIa is discussed.

  1. Neutron radiography imaging with 2-dimensional photon counting method and its problems

    International Nuclear Information System (INIS)

    Ikeda, Y.; Kobayashi, H.; Niwa, T.; Kataoka, T.

    1988-01-01

    An ultra-sensitive neutron imaging system has been devised with a 2-dimensional photon counting camera (ARGUS 100). The imaging system is composed of a 2-dimensional single photon counting tube and a low-background vidicon, followed by an image processing unit and frame memories. Using this imaging system, electronic neutron radiography (NTV) has become possible under a neutron flux of less than 3 × 10⁴ n/cm²·s. (author)

  2. An extension of the maximum principle to multi-dimensional systems and its application in nuclear engineering problems

    International Nuclear Information System (INIS)

    Gilai, D.

    1976-01-01

    The Maximum Principle deals with optimization problems for systems which are governed by ordinary differential equations and which include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding and other nuclear devices with many demands for optimization and savings, and it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it has been widely used in both structural concept design and the dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, i.e. one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multi-dimensional systems; the principle which has been derived is of a more general form and is applicable to any system which can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)

  3. hp Spectral element methods for three dimensional elliptic problems

    Indian Academy of Sciences (India)

    The paper studies hp spectral element methods for three-dimensional elliptic boundary value problems on non-smooth domains in R³. Dirichlet problems are treated on geometric meshes with N layers, using spectral element functions of variable degree bounded by W, and a stability theorem is proved for mixed problems when the spectral element functions vanish on the boundary.

  4. Problems on one-dimensionally disordered lattices, and reliability of structural analysis of liquids and amorphous solids

    International Nuclear Information System (INIS)

    Kakinoki, J.

    1974-01-01

    Methods for obtaining the intensity of X-ray diffraction by one-dimensionally disordered lattices have been studied, and a matrix method was developed. The method has been applied to structural analysis. Several problems concerning neutron diffraction arose in the course of the analysis: large single crystals must be used for measurement; it is hard to grasp local variations of structure; the technique of topography is still under development; measurement of weak diffraction intensities is not sufficient; and the technique of photography for observing overall features is not good. General remarks concerning one-dimensionally disordered lattices are as follows. A large number of parameters is not practical for analysis, and preferably there are two disorder parameters. In the case of disorder between two kinds of layers having the same frequency but different structure, no peak shift is caused, and the Laue term remains at the position. The reliability of the structural analysis of liquids and amorphous solids is discussed. The analysis is basically that of a two-atom molecule composed of the same kind of atoms. The intensity of diffraction can be obtained from the radial distribution function (RDF). Since practical observation is limited to a finite region, the termination effect should be taken into consideration. The accuracy of the analysis is not good in the case of X-ray diffraction; analysis by neutron diffraction is preferable. (Kato, T.)

  5. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser which consists of a MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that zero bit error rate performance has been achieved.
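
    The decoding step can be illustrated with a discrete azimuthal Fourier transform: sampling the field on a ring and reading the topological charges off the FFT spectrum. This is an assumed simplification of the paper's interferometric analyser, not its actual procedure:

      import numpy as np

      M = 256                                   # azimuthal samples
      phi = np.linspace(0, 2 * np.pi, M, endpoint=False)

      charges = [0, 3, 5]                       # encoded modes (l = 0: Gaussian)
      field = sum(np.exp(1j * l * phi) for l in charges)

      spectrum = np.abs(np.fft.fft(field)) / M  # weight of each integer charge l
      decoded = [l for l in range(M // 2) if spectrum[l] > 0.5]
      print(decoded)                            # -> [0, 3, 5]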

  6. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity in the last decade have provoked the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the designated biological questions they seek to answer, will be discussed. Particular attention is given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Three-Dimensional Porous Nitrogen-Doped NiO Nanostructures as Highly Sensitive NO2 Sensors

    Directory of Open Access Journals (Sweden)

    Van Hoang Luan

    2017-10-01

    Nickel oxide has been widely used in chemical sensing applications because it has excellent p-type semiconducting properties with high chemical stability. Here, we present a novel technique for fabricating three-dimensional porous nitrogen-doped nickel oxide nanosheets as a highly sensitive NO2 sensor. The elaborate nanostructure was prepared by a simple and effective hydrothermal synthesis method. Subsequently, nitrogen doping was achieved by thermal treatment with ammonia gas. When the p-type dopant, i.e., nitrogen atoms, was introduced into the three-dimensional nanostructures, the nickel-oxide-nanosheet-based sensor showed considerable NO2 sensing ability, with two-fold higher responsivity and sensitivity compared to non-doped nickel-oxide-based sensors.

  8. Alcohol-Related Problems And High Risk Sexual Behaviour In ...

    African Journals Online (AJOL)

    There was a significant association between alcohol-related problems and risky sexual behavior. Alcohol-related problems are fairly common in people already infected with HIV/AIDS and are associated with high-risk sexual behavior. Thus, screening and treatment should be part of an effective HIV intervention program.

  9. Myth 15: High-Ability Students Don't Face Problems and Challenges

    Science.gov (United States)

    Moon, Sidney M.

    2009-01-01

    One rationale for failure to address the needs of high-ability students in schools is that high-ability students do not need special services because they do not face any special problems or challenges. A more extreme corollary of this attitude is the notion that high ability is so protective that students with high ability do not face problems or…

  10. Fabrication, Characterization, Properties, and Applications of Low-Dimensional BiFeO3 Nanostructures

    Directory of Open Access Journals (Sweden)

    Heng Wu

    2014-01-01

    Low-dimensional BiFeO3 nanostructures (e.g., nanocrystals, nanowires, nanotubes, and nanoislands) have received considerable attention due to their novel size-dependent properties and outstanding multiferroic properties at room temperature. In recent years, much progress has been made both in the fabrication and in the (microstructural, electrical, and magnetic) characterization of BiFeO3 low-dimensional nanostructures. An overview of the state of the art in BiFeO3 low-dimensional nanostructures is presented. First, we review the fabrication of high-quality BiFeO3 low-dimensional nanostructures via a variety of techniques; then the structural characterization and physical properties of BiFeO3 low-dimensional nanostructures are summarized. Their potential applications in next-generation magnetoelectric random access memories and photovoltaic devices are also discussed. Finally, we conclude this review by providing our perspectives on future research into BiFeO3 low-dimensional nanostructures, and some key problems are also outlined.

  11. Lorentz covariant tempered distributions in two-dimensional space-time

    International Nuclear Information System (INIS)

    Zinov'ev, Yu.M.

    1989-01-01

    The problem of describing Lorentz covariant distributions without any spectral condition has hitherto remained unsolved, even for two-dimensional space-time. Attempts to solve this problem have already been made. Zharinov obtained an integral representation for the Laplace transform of Lorentz invariant distributions with support in the product of two-dimensional future light cones. However, this integral representation does not make it possible to obtain a complete description of the corresponding Lorentz invariant distributions. In this paper the author gives a complete description of Lorentz covariant distributions for two-dimensional space-time. No spectral condition is assumed.

  12. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

    This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme

  13. Minimal surfaces, stratified multivarifolds, and the plateau problem

    CERN Document Server

    Thi, Dao Trong; Primrose, E J F; Silver, Ben

    1991-01-01

    Plateau's problem is a scientific trend in modern mathematics that unites several different problems connected with the study of minimal surfaces. In its simplest version, Plateau's problem is concerned with finding a surface of least area that spans a given fixed one-dimensional contour in three-dimensional space--perhaps the best-known example of such surfaces is provided by soap films. From the mathematical point of view, such films are described as solutions of a second-order partial differential equation, so their behavior is quite complicated and has still not been thoroughly studied. Soap films, or, more generally, interfaces between physical media in equilibrium, arise in many applied problems in chemistry, physics, and also in nature. In applications, one finds not only two-dimensional but also multidimensional minimal surfaces that span fixed closed "contours" in some multidimensional Riemannian space. An exact mathematical statement of the problem of finding a surface of least area or volume requir...

  14. Hamilton-Jacobi-Bellman approach for the climbing problem for heavy launchers

    OpenAIRE

    Bokanowski , Olivier; Cristiani , Emiliano; Laurent-Varin , Julien; Zidani , Hasnaa

    2012-01-01

    In this paper we investigate the Hamilton-Jacobi-Bellman (HJB) approach for solving a complex real-world optimal control problem in high dimension. We consider the climbing problem for the European launcher Ariane V: The launcher has to reach the Geostationary Transfer Orbit with minimal propellant consumption under state/control constraints. In order to circumvent the well-known curse of dimensionality, we reduce the number of variables in the model exploiting the spe...

  15. Three-dimensional porous graphene-Co₃O₄ nanocomposites for high performance photocatalysts

    Energy Technology Data Exchange (ETDEWEB)

    Bin, Zeng, E-mail: 21467855@qq.com [College of Mechanical Engineering, Hunan University of Arts and Science, Changde 415000 (China); Hui, Long [Department of Applied Physics and Materials Research Center, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)

    2015-12-01

    Highlights: • Three-dimensional porous graphene-Co₃O₄ nanocomposites were synthesized. • Excellent photocatalytic performance. • Separated from the reaction medium by magnetic decantation. - Abstract: Novel three-dimensional porous graphene-Co₃O₄ nanocomposites were synthesized by freeze-drying methods. Scanning and transmission electron microscopy revealed that the graphene formed a three-dimensional porous structure with Co₃O₄ nanoparticle-decorated surfaces. The as-obtained product showed high photocatalytic efficiency and could be easily separated from the reaction medium by magnetic decantation. This nanocomposite may be expected to have potential in water purification applications.

  16. Pseudo-One-Dimensional Magnonic Crystals for High-Frequency Nanoscale Devices

    Science.gov (United States)

    Banerjee, Chandrima; Choudhury, Samiran; Sinha, Jaivardhan; Barman, Anjan

    2017-07-01

    The synthetic magnonic crystals (i.e., periodic composites consisting of different magnetic materials) form a fascinating class of emerging research fields, which aims to command the processing and flow of information by means of spin waves, such as in magnonic waveguides. One of the intriguing features of magnonic crystals is the presence and tunability of band gaps in the spin-wave spectrum, where the high attenuation of the frequency bands can be utilized for frequency-dependent control of the spin waves. However, finding a feasible way of band tuning in a realistic integrated device is still a challenge. Here, we introduce an array of asymmetric saw-tooth-shaped width-modulated nanoscale ferromagnetic waveguides forming a pseudo-one-dimensional magnonic crystal. The frequency dispersion of collective modes measured by the Brillouin light-scattering technique is compared with the band diagram obtained by numerically solving the eigenvalue problem derived from the linearized Landau-Lifshitz magnetic torque equation. We find that the magnonic band-gap width, position, and the slope of dispersion curves are controllable by changing the angle between the spin-wave propagation channel and the magnetic field. The calculated profiles of the dynamic magnetization reveal that the corrugation at the lateral boundary of the waveguide effectively engineers the edge modes, which forms the basis of interactive control in magnonic circuits. The results represent a prospective direction towards managing the internal field distribution as well as the dispersion properties, which find potential applications in dynamic spin-wave filters and magnonic waveguides in the gigahertz frequency range.

  17. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality images at high speed. However, the development of high-resolution CT and ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the direct filtered back projection (DFBP) method has been employed to directly process fan-beam projection data in reconstruction. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the FBP method, was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. With this method high speed is expected, but the reconstructed images might be degraded by the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, the employment of spline interpolation, which allows the acquisition of high-quality images with fewer errors, has been demonstrated by numerical and visual evaluation based on simulated and actual data. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
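
    The rebinning step itself is a two-dimensional interpolation. A minimal Python sketch under an assumed equiangular fan geometry (a fan-beam sample at source angle beta and fan angle gamma coincides with the parallel-beam sample at theta = beta + gamma, s = D·sin(gamma); all names here are illustrative):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def rebin(fan, betas, gammas, D, thetas, ss):
          # linear interpolation shown; a spline (method='cubic' in recent
          # SciPy) corresponds to the lower-error choice discussed above
          interp = RegularGridInterpolator((betas, gammas), fan,
                                           bounds_error=False, fill_value=0.0)
          Th, S = np.meshgrid(thetas, ss, indexing='ij')
          gamma = np.arcsin(np.clip(S / D, -1.0, 1.0))
          beta = Th - gamma
          return interp(np.stack([beta, gamma], axis=-1))

    The resulting parallel-beam sinogram can then be passed to a two-dimensional Fourier reconstruction.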

  18. Semi-analog Monte Carlo (SMC) method for time-dependent non-linear three-dimensional heterogeneous radiative transfer problems

    International Nuclear Information System (INIS)

    Yun, Sung Hwan

    2004-02-01

    Radiative transfer is a complex phenomenon in which a radiation field interacts with material. This thermal radiative transfer phenomenon is described by two equations: the balance equation of photons and the material energy balance equation. The two equations involve non-linearity due to the temperature, and that makes the radiative transfer equation more difficult to solve. During the last several years, there have been many efforts to solve non-linear radiative transfer problems by the Monte Carlo method. Among them, it is known that the Semi-Analog Monte Carlo (SMC) method developed by Ahrens and Larsen is accurate regardless of the time step size in the low temperature region. However, their work is limited to one-dimensional, low temperature problems. In this thesis, we suggest methods to remove these limitations of the SMC method and apply it to more realistic problems. An initially cold problem was solved over the entire temperature region by using piecewise linear interpolation of the heat capacity, while the heat capacity is still fitted as a cubic curve within the lowest temperature region. If we assume the heat capacity to be linear in each temperature region, the non-linearity still remains in the radiative transfer equations. We then introduce a first-order Taylor expansion to linearize the non-linear radiative transfer equations. During the linearization procedure, absorption-reemission phenomena may be described by a conventional reemission time sampling scheme which is similar to the repetitive sampling scheme in particle transport Monte Carlo methods. But this scheme causes significant stochastic errors, which necessitates many histories. Thus, we present a new reemission time sampling scheme which reduces stochastic errors by storing the information of absorption times. The results of the comparison of the two schemes show that the new scheme has fewer stochastic errors. Therefore, the improved SMC method is able to solve more realistic problems with

  19. K-FIX: a computer program for transient, two-dimensional, two-fluid flow. THREED: an extension of the K-FIX code for three-dimensional calculations

    International Nuclear Information System (INIS)

    Rivard, W.C.; Torrey, M.D.

    1978-10-01

    The transient, two-dimensional, two-fluid code K-FIX has been extended to perform three-dimensional calculations. This capability is achieved by adding five modification sets of FORTRAN statements to the basic two-dimensional code. The modifications are listed and described, and a complete listing of the three-dimensional code is provided. Results of an example problem are provided for verification

  20. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information, and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
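
    As a structure-free baseline in this spirit, activity against a held-out target can be predicted from a compound's activity profile across the remaining targets. A small scikit-learn sketch on synthetic data (the profiles are random, so the printed accuracy only reflects the majority class; all names are illustrative):

      import numpy as np
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(0)
      profiles = (rng.random((500, 100)) < 0.1).astype(int)  # compounds x targets
      held_out = 42                                          # target to predict

      X = np.delete(profiles, held_out, axis=1)  # profile without that target
      y = profiles[:, held_out]                  # activity against it

      clf = BernoulliNB().fit(X[:400], y[:400])
      print(clf.score(X[400:], y[400:]))         # held-out accuracy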

  1. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios have been widely used in different fields including aerospace and defense industries, while the dimensional measurement of these micro parts becomes a challenge in the field of precision measurement and instrument. To deal with this contradiction, several probes for the micro parts precision measurement have been proposed by researchers in Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures of spherical coupling(SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg grating (FBG) are described in detail. After introducing the sensing principles, both advantages and disadvantages of these probes are analyzed respectively. In order to improve the performances of these probes, several approaches are proposed. A two-dimensional orthogonal path arrangement is propounded to enhance the dimensional measurement ability of MFL-collimation probes, while a high resolution and response speed interrogation method based on differential method is used to improve the accuracy and dynamic characteristics of the FBG probes. The experiments for these special structural fiber probes are given with a focus on the characteristics of these probes, and engineering applications will also be presented to prove the availability of them. In order to improve the accuracy and the instantaneity of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes were therefore verified through both the analysis and experiments.

  2. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss Hawking radiation spectrum of high-dimensional rotating black hole using Tortoise coordinate transformation defined by taking the reaction of the radiation to the spacetime into consideration. Under the condition that the energy and angular momentum are conservative, taking self-gravitation action into account, we derive Hawking radiation spectrums which satisfy unitary principle in quantum mechanics. It is shown that the process that the black hole radiates particles with energy {omega} is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)

  3. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, Zachary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wendelberger, Joanne Roth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-14

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the
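
    The reduction step is straightforward to sketch: discretized parameter curves are compressed with PCA, and random-walk proposals are made in the low-dimensional coefficient space. The curves below are synthetic and all details are assumed for illustration:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 100)
      curves = np.array([np.sin(2 * np.pi * (t + rng.normal(0, 0.05)))
                         + rng.normal(0, 0.02, t.size) for _ in range(50)])

      pca = PCA(n_components=3).fit(curves)
      scores = pca.transform(curves)                # low-dimensional coordinates

      proposal = scores[0] + rng.normal(0, 0.1, 3)  # random-walk step in PC space
      candidate = pca.inverse_transform(proposal)   # back to a full curve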

  4. Solution of the two-dimensional space-time reactor kinetics equation by a locally one-dimensional method

    International Nuclear Information System (INIS)

    Chen, G.S.; Christenson, J.M.

    1985-01-01

    In this paper, the authors present some initial results from an investigation of the application of a locally one-dimensional (LOD) finite difference method to the solution of the two-dimensional, two-group reactor kinetics equations. Although the LOD method is relatively well known, it apparently has not been previously applied to the space-time kinetics equations. In this investigation, the LOD results were benchmarked against similar computational results (using the same computing environment, the same programming structure, and the same sample problems) obtained by the TWIGL program. For all of the problems considered, the LOD method provided accurate results in one-half to one-eighth of the time required by the TWIGL program.
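
    The LOD idea itself is compact: each time step is split into one-dimensional implicit sweeps, so only tridiagonal systems are solved. A minimal sketch for the 2-D heat equation, a stand-in for the two-group kinetics equations; the grid, step sizes and boundary conditions here are arbitrary choices:

      import numpy as np
      from scipy.linalg import solve_banded

      def implicit_1d(u, r):
          # solve (I - r*D2) u_new = u along axis 0, zero-Dirichlet walls
          n = u.shape[0]
          ab = np.zeros((3, n))
          ab[0, 1:] = -r                # superdiagonal
          ab[1, :] = 1.0 + 2.0 * r      # main diagonal
          ab[2, :-1] = -r               # subdiagonal
          return solve_banded((1, 1), ab, u)

      def lod_step(u, dt, h):
          r = dt / h**2
          u = implicit_1d(u, r)         # x-sweep (implicit in x only)
          return implicit_1d(u.T, r).T  # y-sweep (implicit in y only)

      u = np.zeros((64, 64)); u[28:36, 28:36] = 1.0
      for _ in range(100):
          u = lod_step(u, dt=1e-4, h=1.0 / 63)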

  5. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    International Nuclear Information System (INIS)

    Monsefi, Farid; Carlsson, Linus; Silvestrov, Sergei; Rančić, Milica; Otterskog, Magnus

    2014-01-01

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Confined within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary is in the form of a Perfect Electric Conducting (PEC) surface. Inserted in the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. The numerical instability of the algorithms can be rather easily avoided with respect to the Courant stability condition, which is frequently used in applying the general FDTD algorithm

  6. Solution of two-dimensional electromagnetic scattering problem by FDTD with optimal step size, based on a semi-norm analysis

    Energy Technology Data Exchange (ETDEWEB)

    Monsefi, Farid [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås, Sweden and School of Innovation, Design and Engineering, IDT, Mälardalen University, MDH Väs (Sweden); Carlsson, Linus; Silvestrov, Sergei [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås (Sweden); Rančić, Milica [Division of Applied Mathematics, The School of Education, Culture and Communication, Mälardalen University, MDH, Västerås, Sweden and Department of Theoretical Electrical Engineering, Faculty of Electronic Engineering, University (Serbia); Otterskog, Magnus [School of Innovation, Design and Engineering, IDT, Mälardalen University, MDH Västerås (Sweden)

    2014-12-10

    To solve the electromagnetic scattering problem in two dimensions, the Finite Difference Time Domain (FDTD) method is used. The order of convergence of the FDTD algorithm, solving the two-dimensional Maxwell’s curl equations, is estimated in two different computer implementations: with and without an obstacle in the numerical domain of the FDTD scheme. This constitutes an electromagnetic scattering problem where a lumped sinusoidal current source, as a source of electromagnetic radiation, is included inside the boundary. Confined within the boundary, a specific kind of Absorbing Boundary Condition (ABC) is chosen, and the outside of the boundary is in the form of a Perfect Electric Conducting (PEC) surface. Inserted in the computer implementation, a semi-norm has been applied to compare different step sizes in the FDTD scheme. First, the domain of the problem is chosen to be free space without any obstacles. In the second part of the computer implementations, a PEC surface is included as the obstacle. The numerical instability of the algorithms can be rather easily avoided with respect to the Courant stability condition, which is frequently used in applying the general FDTD algorithm.
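
    For reference, the scheme being benchmarked reduces to a few array updates per time step. A bare-bones 2-D TMz vacuum FDTD loop with PEC outer walls follows; the grid size, source position and frequency are arbitrary choices here, not the authors' setup:

      import numpy as np

      nx = ny = 100
      c, dx = 299_792_458.0, 1e-3
      dt = 0.99 * dx / (c * np.sqrt(2.0))   # satisfies the Courant condition
      eta0 = 376.73                         # free-space impedance

      Ez = np.zeros((nx, ny))
      Hx = np.zeros((nx, ny - 1))
      Hy = np.zeros((nx - 1, ny))

      for n in range(300):
          Hx -= (dt * c / (dx * eta0)) * np.diff(Ez, axis=1)
          Hy += (dt * c / (dx * eta0)) * np.diff(Ez, axis=0)
          Ez[1:-1, 1:-1] += (dt * c * eta0 / dx) * (
              np.diff(Hy, axis=0)[:, 1:-1] - np.diff(Hx, axis=1)[1:-1, :])
          Ez[nx // 2, ny // 2] += np.sin(2 * np.pi * 1e9 * n * dt)  # lumped source
          # Ez is never updated on the borders, which acts as the PEC wall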

  7. A two-dimensional finite-element method for calculating TF coil response to out-of-plane Lorentz forces

    International Nuclear Information System (INIS)

    Witt, R.J.

    1989-01-01

    Toroidal field (TF) coils in fusion systems are routinely operated at very high magnetic fields. While obtaining the response of the coil to in-plane loads is relatively straightforward, the same is not true for the out-of-plane loads. Previous treatments of the out-of-plane problem have involved large, three-dimensional finite element idealizations. A new treatment of the out-of-plane problem is presented here; the model is two-dimensional in nature, and consumes far less CPU-time than three-dimensional methods. The approach assumes there exists a region of torsional deformation in the inboard leg and a bending region in the outboard leg. It also assumes the outboard part of the coil is attached to a torque frame/cylinder, which experiences primarily torsional deformation. Three-dimensional transition regions exist between the inboard and outboard legs and between the outboard leg and the torque frame. By considering several idealized problems of cylindrical shells subjected to moment distributions, it is shown that the size of these three-dimensional regions is quite small, and that the interaction between the torsional and bending regions can be treated in an equivalent two-dimensional fashion. Equivalent stiffnesses are derived to model penetration into and twist along the cylinders. These stiffnesses are then used in a special substructuring analysis to couple the three regions together. Results from the new method are compared to results from a 3D continuum model. (orig.)

  8. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors. For teachers

  9. A One-Dimensional Thermoelastic Problem due to a Moving Heat Source under Fractional Order Theory of Thermoelasticity

    Directory of Open Access Journals (Sweden)

    Tianhu He

    2014-01-01

    The dynamic response of a one-dimensional problem for a thermoelastic rod of finite length is investigated in the context of the fractional order theory of thermoelasticity. The rod is fixed at both ends and subjected to a moving heat source. The fractional order thermoelastic coupled governing equations for the rod are formulated. The Laplace transform and its numerical inversion are applied to solve the governing equations. The variations of the considered temperature, displacement, and stress in the rod are obtained and presented graphically. The effects of time, the velocity of the moving heat source, and the fractional order parameter on the distributions of the considered variables are discussed in detail.
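
    The record does not state which inversion algorithm is used; the Gaver-Stehfest method is one standard choice and is short enough to sketch (checked here against the known pair L{exp(-t)} = 1/(s+1)):

      import numpy as np
      from math import factorial

      def stehfest_weights(N=12):            # N must be even
          V = np.zeros(N)
          for k in range(1, N + 1):
              s = 0.0
              for j in range((k + 1) // 2, min(k, N // 2) + 1):
                  s += (j ** (N // 2) * factorial(2 * j)
                        / (factorial(N // 2 - j) * factorial(j)
                           * factorial(j - 1) * factorial(k - j)
                           * factorial(2 * j - k)))
              V[k - 1] = (-1) ** (k + N // 2) * s
          return V

      def invert(F, t, N=12):
          V = stehfest_weights(N)
          ln2 = np.log(2.0)
          return ln2 / t * sum(V[k] * F((k + 1) * ln2 / t) for k in range(N))

      print(invert(lambda s: 1.0 / (s + 1.0), t=1.0))   # ~ 0.3679 = exp(-1)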

  10. An asymptotic analytical solution to the problem of two moving boundaries with fractional diffusion in one-dimensional drug release devices

    International Nuclear Information System (INIS)

    Yin Chen; Xu Mingyu

    2009-01-01

    We set up a one-dimensional mathematical model with a Caputo fractional operator of a drug released from a polymeric matrix that can be dissolved into a solvent. A problem with two moving boundaries in fractional anomalous diffusion (in time) with order α ∈ (0, 1], under the assumption that the dissolving boundary dissolves slowly, is presented in this paper. The two-parameter regular perturbation technique and Fourier and Laplace transform methods are used. A dimensionless asymptotic analytical solution is given in terms of the Wright function.

  11. Sparse Learning of the Disease Severity Score for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Ivan Stojkovic

    2017-01-01

    Full Text Available Learning disease severity scores automatically from collected measurements may improve the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both the quantity and diversity of measured and stored data, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the setting in which the dimensionality of the measured variables is large. Learning the severity score in such cases raises the issue of which of the measured features are relevant. We have proposed a novel approach that combines desirable properties of existing formulations, and it compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function. The proposed formulation has a nonsmooth penalty that induces sparsity. This problem is solved by addressing a dual formulation which is smooth and allows an efficient optimization. The proposed approach might be used as an effective and reliable tool for both scoring-function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to the severity of influenza symptoms, which are enriched in immune-related processes.
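    The penalty and dual solver are not spelled out in the record, so the sketch below substitutes the most familiar nonsmooth sparsity-inducing formulation, an l1-penalized least-squares score, solved by proximal gradient descent (ISTA). It only illustrates how a sparse scoring function falls out of such a penalty in a p >> n setting; the authors' exact formulation differs.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """l1-penalized least squares solved by proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))       # many more features than samples
w_true = np.zeros(1000); w_true[:5] = 1.0  # only 5 features drive the score
y = X @ w_true + 0.1 * rng.standard_normal(100)
print("selected features:", np.flatnonzero(np.abs(ista(X, y, 0.1)) > 1e-8))
```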

  12. Best Known Problem Solving Strategies in "High-Stakes" Assessments

    Science.gov (United States)

    Hong, Dae S.

    2011-01-01

    In its mathematics standards, National Council of Teachers of Mathematics (NCTM) states that problem solving is an integral part of all mathematics learning and exposure to problem solving strategies should be embedded across the curriculum. Furthermore, by high school, students should be able to use, decide and invent a wide range of strategies.…

  13. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation: It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing AUC-based work in the high-dimensional context depends mainly on non-parametric, smooth approximations of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...
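    As background for the record above (not its method): the quantity every AUC-based approach targets is the empirical AUC, which equals the normalized Mann-Whitney U statistic. A minimal sketch:

```python
import numpy as np

def auc(scores, labels):
    """Probability that a random positive is scored above a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 1, 0, 0])
print(auc(scores, labels))   # 1.0: positives are perfectly separated
```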

  14. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties in analyzing these data come mainly from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no universally agreed set of measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model are obtained by our proposed hybrid LASSLE (LASSO + LSE) method, which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). We then employ measures of connectivity with an emphasis on partial directed coherence (PDC), which captures the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
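    A two-stage "regularize, then refit" estimator of this general kind is easy to sketch: a lasso pass selects the support of each VAR equation, and an ordinary least-squares refit on the selected lags reduces the shrinkage bias. The sketch below is in that spirit only; the actual LASSLE procedure may differ in detail.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
P, T = 20, 500                                   # channels, time points
A_true = 0.3 * np.eye(P) + np.diag(np.full(P - 1, 0.4), k=-1)  # sparse VAR(1)
X = np.zeros((T, P))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.standard_normal(P)

Y, Z = X[1:], X[:-1]                             # regress X_t on X_{t-1}
A_hat = np.zeros((P, P))
for i in range(P):                               # one regression per channel
    support = np.flatnonzero(                    # stage 1: lasso selects the support
        Lasso(alpha=0.01, fit_intercept=False).fit(Z, Y[:, i]).coef_)
    if support.size:                             # stage 2: OLS refit on the support
        A_hat[i, support] = np.linalg.lstsq(Z[:, support], Y[:, i], rcond=None)[0]
print("nonzero lags recovered per channel:", (A_hat != 0).sum(axis=1))
```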

  15. Pericles and Attila results for the C5G7 MOX benchmark problems

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.

    2002-01-01

    Recently the Nuclear Energy Agency has published a new benchmark entitled 'C5G7 MOX Benchmark.' This benchmark tests the ability of current transport codes to treat reactor core problems without spatial homogenization. The benchmark includes both a two- and a three-dimensional problem. We have calculated results for these benchmark problems with our Pericles and Attila codes. Pericles is a one-, two-, and three-dimensional unstructured-grid discrete-ordinates code and was used for the two-dimensional benchmark problem. Attila is a three-dimensional unstructured tetrahedral-mesh discrete-ordinates code and was used for the three-dimensional problem. Both codes use discontinuous finite element spatial differencing and diffusion synthetic acceleration (DSA) for accelerating the inner iterations.

  16. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶-10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
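    The SNS integrand itself is not reproduced in the record, so the sketch below compares the two ingredients it mentions, nested one-dimensional quadrature and quasi-Monte Carlo, on a toy four-dimensional integrand (using SciPy rather than the GNU Scientific Library).

```python
import numpy as np
from scipy import integrate
from scipy.stats import qmc

# Toy separable integrand over the unit 4-cube; any smooth integrand would do.
f = lambda x, y, z, w: np.exp(-(x * x + y * y + z * z + w * w))

# Nested adaptive 1D quadrature: scipy chains quad rules over each dimension.
val, err = integrate.nquad(f, [(0.0, 1.0)] * 4)

# Quasi-Monte Carlo estimate with a scrambled Sobol sequence.
pts = qmc.Sobol(d=4, scramble=True, seed=0).random_base2(m=14)  # 2^14 points
qmc_val = np.mean(f(*pts.T))

print(f"nquad: {val:.6f} (est. err {err:.1e})  QMC: {qmc_val:.6f}")
```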

  17. Method of dimensionality reduction in contact mechanics and friction

    CERN Document Server

    Popov, Valentin L

    2015-01-01

    This book describes for the first time a simulation method for the fast calculation of contact properties and friction between rough surfaces in a complete form. In contrast to existing simulation methods, the method of dimensionality reduction (MDR) is based on the exact mapping of various types of three-dimensional contact problems onto contacts with one-dimensional foundations. Within the confines of MDR, not only are three-dimensional systems reduced to one-dimensional ones, but the resulting degrees of freedom are also independent of one another. Therefore, MDR results in an enormous reduction of the development time for the numerical implementation of contact problems as well as of the direct computation time, and can ultimately assume a similar role in tribology as FEM has in structural mechanics or CFD methods in hydrodynamics. Furthermore, it substantially simplifies analytical calculation and presents a sort of “pocket book edition” of the entirety of contact mechanics. Measurements of the rheology of bodies in...
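    The flavor of MDR is easy to show for the textbook case of frictionless normal contact of a sphere: the 3D profile f(r) = r²/(2R) maps to the 1D profile g(x) = x²/R, which is pressed into a bed of independent springs of stiffness E*·dx. A minimal sketch with illustrative parameter values (the book's own derivations go much further):

```python
import numpy as np

R, E_star, d = 0.01, 1e9, 1e-5       # sphere radius [m], modulus [Pa], indentation [m]

a = np.sqrt(R * d)                   # half-width of the 1D contact region
x = np.linspace(-a, a, 20001)
u = d - x ** 2 / R                   # spring compressions of the MDR profile
F_mdr = E_star * np.sum(u) * (x[1] - x[0])   # total normal force of the spring bed

F_hertz = 4.0 / 3.0 * E_star * np.sqrt(R) * d ** 1.5   # classical 3D Hertz result
print(f"MDR: {F_mdr:.4f} N   Hertz: {F_hertz:.4f} N")  # the two agree
```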

  18. NATO Advanced Study Institute on Low-dimensional Cooperative Phenomena : the Possibility of High-Temperature Superconductivity

    CERN Document Server

    1975-01-01

    Theoretical and experimental work on solids with low-dimensional cooperative phenomena has been rather explosively expanded in the last few years, and it seems to be quite fashionable to contribute to this field, especially to the problem of one-dimensional metals. On the whole, one could divide the huge amount of recent investigations into two parts although there is much overlap between these regimes, namely investigations on magnetic exchange interactions constrained to mainly one or two dimensions and, secondly, work done on 1d metallic solids or linear chain compounds with 1d delocalized electrons. There is, of course, overlap from one extreme case to the other with these solids and in some rare cases both phenomena are studied on one and the same crystal. In fact, however, most of the scientific groups in this area could be associated roughly with one of these categories and, in addition, a separation between theoreticians and experimentalists in each of these groups leads to a further splitting of...

  19. A High Performance Banknote Recognition System Based on a One-Dimensional Visible Light Line Sensor.

    Science.gov (United States)

    Park, Young Ho; Kwon, Seung Yong; Pham, Tuyen Danh; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2015-06-15

    An algorithm for recognizing banknotes is required in many fields, such as banknote-counting machines and automatic teller machines (ATM). Due to the size and cost limitations of banknote-counting machines and ATMs, the banknote image is usually captured by a one-dimensional (line) sensor instead of a conventional two-dimensional (area) sensor. Because the banknote image is captured by the line sensor while it is moved at high speed through the rollers inside the banknote-counting machine or ATM, misalignment, geometric distortion, and non-uniform illumination of the captured images frequently occur, which degrades the banknote recognition accuracy. To overcome these problems, we propose a new method for recognizing banknotes. The experimental results using two-fold cross-validation for 61,240 United States dollar (USD) images show that the pre-classification error rate is 0%, and the average error rate for the final recognition of the USD banknotes is 0.114%.

  20. A High Performance Banknote Recognition System Based on a One-Dimensional Visible Light Line Sensor

    Directory of Open Access Journals (Sweden)

    Young Ho Park

    2015-06-01

    Full Text Available An algorithm for recognizing banknotes is required in many fields, such as banknote-counting machines and automatic teller machines (ATM). Due to the size and cost limitations of banknote-counting machines and ATMs, the banknote image is usually captured by a one-dimensional (line) sensor instead of a conventional two-dimensional (area) sensor. Because the banknote image is captured by the line sensor while it is moved at high speed through the rollers inside the banknote-counting machine or ATM, misalignment, geometric distortion, and non-uniform illumination of the captured images frequently occur, which degrades the banknote recognition accuracy. To overcome these problems, we propose a new method for recognizing banknotes. The experimental results using two-fold cross-validation for 61,240 United States dollar (USD) images show that the pre-classification error rate is 0%, and the average error rate for the final recognition of the USD banknotes is 0.114%.

  1. 3-dimensional analysis of FELIX brick with hole

    International Nuclear Information System (INIS)

    Lee, Taek-Kyung; Lee, Soo-Young; Ra, Jung-Woong

    1987-01-01

    Electromagnetic induction on FELIX brick with a hole has been analyzed with 3-Dimensional EDDYNET computer code. Incorporating loop currents on hexahedral meshes, the 3-Dimensional EDDYNET program solves eddy current problems by a network approach, and provides good accuracy even for coarse meshes. (author)

  2. Fast solution of neutron diffusion problem by reduced basis finite element method

    International Nuclear Information System (INIS)

    Chunyu, Zhang; Gong, Chen

    2018-01-01

    Highlights: •An extremely efficient method is proposed to solve the neutron diffusion equation with varying cross sections. •Three orders of magnitude of speedup are achieved for IAEA benchmark problems. •The method may open a new possibility of efficient high-fidelity modeling of large-scale problems in nuclear engineering. -- Abstract: For important applications that require many repeated neutron diffusion calculations, such as fuel depletion analysis and neutronics-thermohydraulics coupling analysis, fast and accurate solutions of the neutron diffusion equation are demanded. In the present work, the certified reduced basis finite element method is proposed and implemented to solve the generalized eigenvalue problems of neutron diffusion with variable cross sections. The reduced-order model is built upon high-fidelity finite element approximations during the offline stage. During the online stage, both k_eff and the spatial distribution of the neutron flux can be obtained very efficiently for any given set of cross sections. Numerical tests show that a speedup of around 1100 is achieved for the IAEA two-dimensional PWR benchmark problem and a speedup of around 3400 is achieved for the three-dimensional counterpart, with the fission, absorption, and scattering cross-sections treated as parameters.
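    The offline/online split described above can be illustrated on a toy parametrized eigenproblem: high-fidelity snapshot eigenvectors are compressed into a small basis offline, and a tiny projected eigenproblem is solved online for each new cross-section value. The sketch below uses a crude 1D one-group diffusion model with a parameter scaling the absorption in half the slab; it shows the idea only, not the paper's certified reduced basis method.

```python
import numpy as np
from scipy.linalg import eigh, svd

n = 200
h = 1.0 / (n + 1)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # -u'', u=0 at edges
absorber = np.diag((np.arange(1, n + 1) * h > 0.5).astype(float))  # right half only
B = np.eye(n)                                # fission operator (nu*Sigma_f = 1)

def A(mu):                                   # diffusion + parametrized absorption
    return 0.5 * L + mu * absorber

def keff_full(mu):                           # A u = (1/k) B u  =>  k = 1 / w_min
    w, v = eigh(A(mu), B)
    return 1.0 / w[0], v[:, 0]

# Offline: snapshots of the fundamental mode over the parameter range -> basis.
snaps = np.column_stack([keff_full(mu)[1] for mu in np.linspace(5.0, 15.0, 5)])
V = svd(snaps, full_matrices=False)[0][:, :3]        # 3 reduced basis vectors

# Online: a 3x3 generalized eigenproblem for any new parameter value.
mu = 9.3
w_r = eigh(V.T @ A(mu) @ V, V.T @ B @ V)[0][0]
print(f"k_eff full: {keff_full(mu)[0]:.6f}   reduced: {1.0 / w_r:.6f}")
```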

  3. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Summary Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  4. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  5. Two-dimensional ferroelectrics

    Energy Technology Data Exchange (ETDEWEB)

    Blinov, L M; Fridkin, Vladimir M; Palto, Sergei P [A.V. Shubnikov Institute of Crystallography, Russian Academy of Sciences, Moscow (Russian Federation); Bune, A V; Dowben, P A; Ducharme, Stephen [Department of Physics and Astronomy, Behlen Laboratory of Physics, Center for Materials Research and Analysis, University of Nebraska-Lincoln, Lincoln, NE (United States)

    2000-03-31

    The investigation of the finite-size effect in ferroelectric crystals and films has been limited by the experimental conditions. The smallest demonstrated ferroelectric crystals had a diameter of ≈200 Å and the thinnest ferroelectric films were ≈200 Å thick, macroscopic sizes on an atomic scale. Langmuir-Blodgett deposition of films one monolayer at a time has produced high quality ferroelectric films as thin as 10 Å, made from polyvinylidene fluoride and its copolymers. These ultrathin films permitted the ultimate investigation of finite-size effects on the atomic thickness scale. Langmuir-Blodgett films also revealed the fundamental two-dimensional character of ferroelectricity in these materials by demonstrating that there is no so-called critical thickness; films as thin as two monolayers (1 nm) are ferroelectric, with a transition temperature near that of the bulk material. The films exhibit all the main properties of ferroelectricity with a first-order ferroelectric-paraelectric phase transition: polarization hysteresis (switching); the jump in spontaneous polarization at the phase transition temperature; thermal hysteresis in the polarization; the increase in the transition temperature with applied field; double hysteresis above the phase transition temperature; and the existence of the ferroelectric critical point. The films also exhibit a new phase transition associated with the two-dimensional layers. (reviews of topical problems)

  6. Denoising and dimensionality reduction of genomic data

    Science.gov (United States)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed, inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical
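    The separation step the record relies on, Independent Component Analysis, can be demonstrated in a few lines: mix a handful of latent sources into many observed noisy profiles and let FastICA recover them. The data here are synthetic stand-ins for expression profiles, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t),            # latent source signals
                     np.sign(np.sin(3 * t)),
                     (t % 1.0) - 0.5])
S += 0.05 * rng.standard_normal(S.shape)       # measurement noise
A = rng.standard_normal((10, 3))               # mixing into 10 observed profiles
X = S @ A.T

S_hat = FastICA(n_components=3, random_state=0).fit_transform(X)
print("recovered independent components:", S_hat.shape)   # (2000, 3)
```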

  7. Improving Problem-Solving Skills with the Help of Plane-Space Analogies

    Science.gov (United States)

    Budai, László

    2013-01-01

    We live our lives in three-dimensional space and encounter geometrical problems (equipment instructions, maps, etc.) every day. Yet there are not sufficient opportunities for high school students to learn geometry. New teaching methods can help remedy this. Specifically our experience indicates that there is great promise for use of geometry…

  8. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
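    The two quantitative ingredients named above, cluster-wise PCA factors and the RV coefficient between factor sets, are both short to write down. A minimal sketch with two synthetic clusters sharing a common latent signal (illustrative data, not fMRI):

```python
import numpy as np

def pca_factors(X, q):
    """Leading q PCA factor scores (optimal reconstruction under squared loss)."""
    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :q] * s[:q]

def rv_coefficient(X, Y):
    """RV cross-dependence measure between two centered data blocks."""
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(0)
common = rng.standard_normal((300, 2))         # latent activity shared by clusters
c1 = common @ rng.standard_normal((2, 40)) + 0.5 * rng.standard_normal((300, 40))
c2 = common @ rng.standard_normal((2, 60)) + 0.5 * rng.standard_normal((300, 60))

F1, F2 = pca_factors(c1, 2), pca_factors(c2, 2)
print(f"RV between cluster factors: {rv_coefficient(F1, F2):.3f}")   # high
```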

  9. Some improvements to the solution of Stefan-like problems

    International Nuclear Information System (INIS)

    El-Genk, M.S.; Cronenberg, W.

    1979-01-01

    Two approximate analytical methods are developed for solving one-dimensional transient heat-conduction problems with phase transformation, where the growth rate of a frozen crust (layer) on a cold wall is sought. Both provide an accurate prediction of the instantaneous position of the moving boundary as applied to one-dimensional melting and freezing problems. (author)
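    The yardstick for such approximate methods is the exact Neumann solution of the one-phase Stefan problem, in which the crust thickness grows as X(t) = 2λ√(αt) with λ fixed by a transcendental equation in the Stefan number. A minimal sketch of that textbook solution (assumed property values, not the paper's approximations):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

Ste = 0.5            # Stefan number, c_p * (T_m - T_wall) / L  (assumed value)
alpha = 1e-6         # thermal diffusivity of the crust [m^2/s] (assumed value)

# Solve  lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi)  for the growth constant.
lam = brentq(lambda x: x * np.exp(x**2) * erf(x) - Ste / np.sqrt(np.pi), 1e-6, 5.0)

t = np.array([1.0, 10.0, 100.0])                 # time [s]
X = 2.0 * lam * np.sqrt(alpha * t)               # instantaneous crust thickness [m]
print(f"lambda = {lam:.4f}; crust thickness [m]: {X}")
```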

  10. A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT

    International Nuclear Information System (INIS)

    S. GOLUOGLU, C. BENTLEY, R. DEMEGLIO, M. DUNN, K. NORTON, R. PEVEY, I. SUSLOV and H.L. DODDS

    1998-01-01

    A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.

  11. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  12. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    Science.gov (United States)

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.

  13. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
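    The core trick, representing the binary data as a bit-table so that itemset supports and closures become vectorized Boolean operations, fits in a few lines. The sketch below is a NumPy illustration of the idea, not the authors' MATLAB implementation.

```python
import numpy as np

data = np.array([[1, 1, 0, 1],        # rows: transactions/samples
                 [1, 1, 1, 0],        # cols: items/genes
                 [0, 1, 1, 0],
                 [1, 1, 0, 1]], dtype=bool)

def support_rows(itemset):
    """Boolean vector of rows containing every item in `itemset` (one AND)."""
    return data[:, itemset].all(axis=1)

def closure(itemset):
    """All items shared by every supporting row: the closed itemset."""
    return np.flatnonzero(data[support_rows(itemset)].all(axis=0))

print(support_rows([0, 1]).sum())     # support of items {0,1}: 3 rows
print(closure([0, 3]))                # {0,3} closes to {0,1,3}
```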

  14. Drying shrinkage problems in high-plastic clay soils in Oklahoma.

    Science.gov (United States)

    2013-08-01

    Longitudinal cracking in pavements due to drying shrinkage of high-plastic subgrade soils has been a major problem in Oklahoma. Annual maintenance to seal and repair these distress problems costs the state a significant amount of money. The long...

  15. Analysis of Operating Performance and Three Dimensional Magnetic Field of High Voltage Induction Motors with Stator Chute

    Directory of Open Access Journals (Sweden)

    WANG Qing-shan

    2017-06-01

    Full Text Available In view of the technological difficulties of rotor chutes in high-voltage induction motors, a design method adopting a stator chute structure is put forward. A mathematical model of the three-dimensional nonlinear transient field for the stator chute in a high-voltage induction motor is set up. Based on a three-dimensional solid model of the motor, a three-dimensional finite element method built on the T,ψ-ψ electromagnetic potentials is adopted for the analysis and calculation of the stator chute in the high-voltage induction motor under rated conditions. The axial distributions of the fundamental-wave and tooth-harmonic magnetic fields with the stator chute are analyzed, and the weakening effect on the main tooth harmonics is investigated. Furthermore, a comparative analysis of the main performance parameters of the chute and straight-slot designs is carried out under rated conditions. The results show that the electrical performance of the stator chute is better than that of the straight slot in high-voltage induction motors, and that the tooth harmonics are sharply decreased.

  16. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high-dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of earlier prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that the increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, by means of numerical analysis the dynamics of three large hypercycle networks is also studied, focusing on their extinction dynamics associated with the ghosts. Such networks allow one to explore the properties of the ghosts living in high-dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase-space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
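    The ghost mechanism itself can be reproduced with the smallest catalytic replicator, x' = k x^2 (1 - x) - d x, which has a saddle-node bifurcation at d = k/4; just past it, trajectories linger near the vanished fixed point before extinction, with the classic inverse-square-root scaling of the delay. This toy sketch illustrates the phenomenon only, not the paper's n-member hypercycle model.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
for eps in [1e-3, 1e-4, 1e-5]:                 # distance past the bifurcation
    d = k / 4.0 + eps
    sol = solve_ivp(lambda t, x: k * x**2 * (1.0 - x) - d * x,
                    t_span=(0.0, 1e5), y0=[0.6],
                    events=lambda t, x: x[0] - 1e-3,   # crossing = "extinct"
                    rtol=1e-9, atol=1e-12)
    print(f"eps = {eps:.0e}: extinction time ~ {sol.t_events[0][0]:.0f}")
# The delay grows roughly like eps**-0.5, the saddle-node ghost signature.
```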

  17. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  18. Stable Graphene-Two-Dimensional Multiphase Perovskite Heterostructure Phototransistors with High Gain.

    Science.gov (United States)

    Shao, Yuchuan; Liu, Ye; Chen, Xiaolong; Chen, Chen; Sarpkaya, Ibrahim; Chen, Zhaolai; Fang, Yanjun; Kong, Jaemin; Watanabe, Kenji; Taniguchi, Takashi; Taylor, André; Huang, Jinsong; Xia, Fengnian

    2017-12-13

    Recently, two-dimensional (2D) organic-inorganic perovskites emerged as an alternative material for their three-dimensional (3D) counterparts in photovoltaic applications with improved moisture resistance. Here, we report a stable, high-gain phototransistor consisting of a monolayer graphene on hexagonal boron nitride (hBN) covered by a 2D multiphase perovskite heterostructure, which was realized using a newly developed two-step ligand exchange method. In this phototransistor, the multiple phases with varying bandgap in 2D perovskite thin films are aligned for the efficient electron-hole pair separation, leading to a high responsivity of ∼10⁵ A W⁻¹ at 532 nm. Moreover, the designed phase alignment method aggregates more hydrophobic butylammonium cations close to the upper surface of the 2D perovskite thin film, preventing the permeation of moisture and enhancing the device stability dramatically. In addition, faster photoresponse and smaller 1/f noise observed in the 2D perovskite phototransistors indicate a smaller density of deep hole traps in the 2D perovskite thin film compared with their 3D counterparts. These desirable properties not only improve the performance of the phototransistor, but also provide a new direction for the future enhancement of the efficiency of 2D perovskite photovoltaics.

  19. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. There are measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: 1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; 2) a compensation panel (a rigid panel with several markers located at known positions) is mounted to the specimen to track the specimen's motion so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; 3) three-dimensional world coordinates of measuring points on the specimen can be reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method leads to good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has been applied in tensile experiments to obtain high-accuracy results as well.
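    Step 2 of the method, recovering the panel pose from markers at known positions, is a standard perspective-n-point problem. A minimal OpenCV sketch with hypothetical marker coordinates and intrinsics (the paper's own coordinate transform algorithm is not reproduced here):

```python
import numpy as np
import cv2

obj_pts = np.array([[0, 0, 0], [80, 0, 0], [80, 60, 0], [0, 60, 0]],
                   dtype=np.float64)            # panel markers [mm], hypothetical
img_pts = np.array([[410.2, 302.5], [905.1, 318.7],
                    [889.4, 688.0], [402.8, 661.3]], dtype=np.float64)
K = np.array([[2000.0, 0.0, 640.0],             # pre-calibrated intrinsics (assumed)
              [0.0, 2000.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                              # distortion already corrected

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                      # panel-to-camera rotation matrix
tilt = np.degrees(np.arccos(np.clip(R[2, 2], -1.0, 1.0)))
print(f"out-of-plane tilt of the specimen: {tilt:.2f} deg")
```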

  20. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    Science.gov (United States)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of the large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques are employed simultaneously using an RGB camera and a color-encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high acquisition rates. The potential of the proposed methodology is demonstrated through the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the applicability of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.