WorldWideScience

Sample records for high dimensional stochastic

  1. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    Science.gov (United States)

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive, parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters and the number of sensitive parameters.
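
    A minimal sketch of the two-step idea on a toy model (the Poisson observable, weights, screening threshold and sample sizes below are illustrative assumptions, not the paper's biochemical networks): parameters are first screened with a Cauchy-Schwarz-type bound that combines the variance of the quantity of interest with the diagonal of an analytic Fisher Information Matrix, and only the surviving parameters receive coupled (common-random-number) finite-difference estimates.

```python
import numpy as np

T = 10.0                                     # observation window
w = np.array([1.0, 0.05, 0.8, 0.01, 0.3])    # per-parameter weights (assumed)
theta = np.full(5, 0.5)

def simulate(theta, seed):
    """Toy stochastic model: a Poisson count with rate w . theta."""
    rng = np.random.default_rng(seed)
    return rng.poisson(T * (w @ theta))

# Step 1: screening.  For a Poisson likelihood the FIM diagonal is analytic,
# I_kk = T w_k^2 / rate; screen with the bound |S_k| <= sqrt(Var(f) * I_kk).
rate = w @ theta
fim_diag = T * w**2 / rate
var_f = np.var([simulate(theta, s) for s in range(2000)])
bound = np.sqrt(var_f * fim_diag)
sensitive = bound > 0.1 * bound.max()        # screening threshold (assumed)

# Step 2: finite differences, only for the unscreened parameters; sharing
# seeds plays the role of the variance-reducing coupling.
h = 0.1
for k in np.flatnonzero(sensitive):
    e = np.zeros_like(theta); e[k] = h
    fd = np.mean([simulate(theta + e, s) - simulate(theta - e, s)
                  for s in range(2000)]) / (2 * h)
    print(f"parameter {k}: bound {bound[k]:.2f}, FD sensitivity {fd:.2f}")
```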

  2. Probabilistic numerical methods for high-dimensional stochastic control and valuation problems on electricity markets

    International Nuclear Information System (INIS)

    Langrene, Nicolas

    2014-01-01

    This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we make the algorithm parsimonious in memory (and hence suitable for high dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations, and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)
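
    The regression Monte-Carlo ingredient can be sketched for a two-mode switching problem (run or idle a plant on a mean-reverting price); a global cubic polynomial basis stands in for the thesis's local basis regressions, discounting is omitted, and all market parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 20_000, 50, 0.02
kappa, mu, sigma = 2.0, 50.0, 8.0        # OU price parameters (assumed)
switch_cost, fuel = 2.0, 45.0            # switching cost, fuel price (assumed)

# Simulate price paths dS = kappa (mu - S) dt + sigma dW.
S = np.empty((n_steps + 1, n_paths)); S[0] = mu
for t in range(n_steps):
    S[t + 1] = S[t] + kappa * (mu - S[t]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

def continuation(x, y):
    """E[y | x] approximated by a cubic polynomial regression."""
    A = np.vander(x, 4)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Backward induction over the two modes: 0 = idle, 1 = running.
V = np.zeros((2, n_paths))               # terminal values
for t in range(n_steps - 1, -1, -1):
    C0, C1 = continuation(S[t], V[0]), continuation(S[t], V[1])
    V = np.vstack([np.maximum(C0, C1 - switch_cost),              # idle
                   (S[t] - fuel) * dt + np.maximum(C1, C0 - switch_cost)])
print("estimated value of the idle plant at t=0:", V[0].mean())
```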

  3. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    Energy Technology Data Exchange (ETDEWEB)

    Zabaras, Nicolas J. [Cornell Univ., Ithaca, NY (United States)

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate, data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  4. Stochastic and infinite dimensional analysis

    CERN Document Server

    Carpio-Bernido, Maria; Grothaus, Martin; Kuna, Tobias; Oliveira, Maria; Silva, José

    2016-01-01

    This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems of infinitely many degrees of freedom create their particular mathematical challenges which have been addressed by different mathematical theories, namely in the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit’s 75th birthday and celebrate his pioneering and ongoing work in these fields.

  5. Patterns of Stochastic Behavior in Dynamically Unstable High-Dimensional Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Simon Rosenfeld

    2009-01-01

    Full Text Available The question of dynamical stability and stochastic behavior of large biochemical networks is discussed. It is argued that stringent conditions of asymptotic stability have very little chance to materialize in a multidimensional system described by the differential equations of chemical kinetics. The reason is that the criteria of asymptotic stability (the Routh-Hurwitz and Lyapunov criteria, Feinberg's Deficiency Zero theorem) would impose limitations of very high algebraic order on the kinetic rates and stoichiometric coefficients, and there are no natural laws that would guarantee their unconditional validity. Highly nonlinear, dynamically unstable systems, however, are not necessarily doomed to collapse, as a simple Jacobian analysis would suggest. It is possible that their dynamics may assume the form of pseudo-random fluctuations quite similar to shot noise, and, therefore, their behavior may be described in terms of Langevin and Fokker-Planck equations. We have shown by simulation that the resulting pseudo-stochastic processes obey the heavy-tailed Generalized Pareto Distribution, with the temporal sequence of pulses forming a set of constituent-specific Poisson processes. Applied to intracellular dynamics, these properties are naturally associated with burstiness, a well-documented phenomenon in the biology of gene expression.
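
    The simulation workflow described above can be miniaturized (the SDE below is an illustrative stand-in for the network dynamics): an Euler-Maruyama integration of a Langevin equation with multiplicative noise, whose stationary law is known to have a power-law tail, followed by a Hill estimate of the tail index of the fluctuation amplitudes.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1e-3, 500_000
x, xs = 0.1, np.empty(n)
for i in range(n):
    # Euler-Maruyama step of dx = -0.5 x dt + x dW + 0.1 dW'; for this
    # multiplicative-noise SDE the stationary tail index is 1 + 2*0.5/1 = 2.
    x += (-0.5 * x * dt
          + x * np.sqrt(dt) * rng.standard_normal()
          + 0.1 * np.sqrt(dt) * rng.standard_normal())
    xs[i] = x

# Hill estimator of the tail index from the largest pulse amplitudes.
top = np.sort(np.abs(xs))[-2000:]
hill = 1.0 / np.mean(np.log(top / top[0]))
print("Hill tail-index estimate (theory: 2):", round(hill, 2))
```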

  6. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method provides significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples are utilized to demonstrate the efficacy of the method.
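
    In one dimension the mechanism is easy to see with divided differences, which annihilate low-degree polynomials: a scaled second difference stays O(1) across a jump but shrinks as O(h^2) in smooth regions (a minimal uniform-grid caricature; the paper's method is adaptive, high-dimensional and sparse-grid based).

```python
import numpy as np

def jump_indicator(f):
    """Second differences: ~ h^2 |f''|/2 in smooth regions, O(jump) at a jump."""
    return np.abs(f[2:] - 2.0 * f[1:-1] + f[:-2]) / 2.0

x = np.linspace(-1.0, 1.0, 201)
f = np.sin(2 * x) + (x > 0.3)          # smooth function plus a unit jump
ind = jump_indicator(f)
print("cells flagged as discontinuous near:", x[1:-1][ind > 0.25])
```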

  7. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
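
    A sketch of the sampling-based construction for separation rank one (the paper treats general ranks and an error indicator; the target function, polynomial degree and ridge parameter are illustrative assumptions): alternating regularized least-squares updates of one polynomial factor per random input.

```python
import numpy as np

rng = np.random.default_rng(3)
d, deg, n_samp, ridge = 6, 4, 2000, 1e-8

# Samples of a separable (rank-1) target u(y) = prod_i exp(y_i / (i + 2)).
Y = rng.uniform(-1, 1, size=(n_samp, d))
u = np.exp((Y / (np.arange(d) + 2)).sum(axis=1))

basis = [np.vander(Y[:, i], deg, increasing=True) for i in range(d)]
c = 0.1 * rng.standard_normal((d, deg)) + np.eye(1, deg)  # start near g_i = 1

for sweep in range(20):                       # alternating least squares
    for i in range(d):
        others = np.ones(n_samp)
        for j in range(d):
            if j != i:
                others *= basis[j] @ c[j]     # product of the frozen factors
        A = basis[i] * others[:, None]        # regression matrix for factor i
        c[i] = np.linalg.solve(A.T @ A + ridge * np.eye(deg), A.T @ u)

approx = np.prod([basis[i] @ c[i] for i in range(d)], axis=0)
print("relative RMS error:", np.linalg.norm(approx - u) / np.linalg.norm(u))
```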

  8. Transport in stochastic multi-dimensional media

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.

    1996-01-01

    Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature, and depend on the specific realization of the media. In the last decade there have been considerable efforts to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)
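
    The bypass effect can be probed with a small Monte Carlo experiment (cross sections, mixing statistics, absorption probability and slab geometry are all illustrative assumptions): particles are Woodcock-tracked through 2D realizations of a two-material medium, and the ensemble-averaged transmission is compared with the homogenized 'atomic mix' answer.

```python
import numpy as np

rng = np.random.default_rng(4)
L_slab, cells = 10.0, 20                 # slab size and material cells per side
p_absorb, n_real, n_part = 0.5, 10, 1000

def transmission(mat, sig):
    """Woodcock (delta) tracking of isotropically scattering particles."""
    sig_maj, trans = sig.max(), 0
    for _ in range(n_part):
        x, y, phi = 0.0, rng.uniform(0, L_slab), 0.0   # enter moving in +x
        while True:
            step = -np.log(1.0 - rng.random()) / sig_maj  # majorant free path
            x += step * np.cos(phi)
            y = (y + step * np.sin(phi)) % L_slab         # periodic sideways
            if x >= L_slab:
                trans += 1
                break
            if x < 0.0:
                break
            i, j = int(x / L_slab * cells), int(y / L_slab * cells)
            if rng.random() < sig[mat[i, j]] / sig_maj:   # real collision
                if rng.random() < p_absorb:
                    break                                 # absorbed
                phi = rng.uniform(0, 2 * np.pi)           # isotropic scatter
    return trans / n_part

sig_two = np.array([0.1, 2.0])                            # the two materials
ens = np.mean([transmission(rng.integers(0, 2, (cells, cells)), sig_two)
               for _ in range(n_real)])
mix = transmission(np.zeros((cells, cells), dtype=int),
                   np.array([sig_two.mean()]))            # atomic-mix reference
print("ensemble-averaged transmission:", ens, " atomic-mix:", mix)
```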

  9. Modeling and Simulation of High Dimensional Stochastic Multiscale PDE Systems at the Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Kevrekidis, Ioannis [Princeton Univ., NJ (United States)

    2017-03-22

    The thrust of the proposal was to exploit modern data-mining tools in a way that will create a systematic, computer-assisted approach to the representation of random media -- and also to the representation of the solutions of an array of important physicochemical processes that take place in/on such media. A parsimonious representation/parametrization of the random media links directly (via uncertainty quantification tools) to good sampling of the distribution of random media realizations. It also links directly to modern multiscale computational algorithms (like the equation-free approach that has been developed in our group) and plays a crucial role in accelerating the scientific computation of solutions of nonlinear PDE models (deterministic or stochastic) in such media – both solutions in particular realizations of the random media, and estimation of the statistics of the solutions over multiple realizations (e.g. expectations).

  10. Tangent map intermittency as an approximate analysis of intermittency in a high dimensional fully stochastic dynamical system: The Tangled Nature model.

    Science.gov (United States)

    Diaz-Ruelas, Alvaro; Jeldtoft Jensen, Henrik; Piovani, Duccio; Robledo, Alberto

    2016-12-01

    It is well known that low-dimensional nonlinear deterministic maps close to a tangent bifurcation exhibit intermittency, and this circumstance has been exploited, e.g., by Procaccia and Schuster [Phys. Rev. A 28, 1210 (1983)], to develop a general theory of 1/f spectra. This suggests it is interesting to study the extent to which the behavior of a high-dimensional stochastic system can be described by such tangent maps. The Tangled Nature (TaNa) Model of evolutionary ecology is an ideal candidate for such a study, a significant model as it is capable of reproducing a broad range of the phenomenology of macroevolution and ecosystems. The TaNa model exhibits strong intermittency reminiscent of punctuated equilibrium and, like the fossil record of mass extinction, the intermittency in the model is found to be non-stationary, a feature typical of many complex systems. We derive a mean-field version for the evolution of the likelihood function controlling the reproduction of species and find a local map close to tangency. This mean-field map, owing to its local nature, is able to describe qualitatively only one episode of the intermittent dynamics of the full TaNa model. To complement this result, we construct a complete nonlinear dynamical system model consisting of successive tangent bifurcations that generates time evolution patterns resembling those of the full TaNa model on macroscopic scales. The switch from one tangent bifurcation to the next in the sequences produced in this model is stochastic in nature, based on criteria obtained from the local mean-field approximation, and capable of imitating the changing set of types of species and total population in the TaNa model. The model combines fully deterministic dynamics with instantaneous parameter random jumps at stochastically drawn times. In spite of the limitations of our approach, which entails a drastic collapse of degrees of freedom, the description of a high-dimensional model system in terms of a low-dimensional one appears to be viable.
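
    The composite model's flavor fits in a few lines (all constants are illustrative, not fitted to the TaNa model): a local map close to tangency produces long quasi-stable episodes, and each reinjection redraws the tangency parameter at a stochastically chosen value.

```python
import numpy as np

rng = np.random.default_rng(5)
x, eps, u = 1e-3, 1e-6, 1.0
traj = np.empty(200_000)
for n in range(traj.size):
    x = x + u * x * x + eps            # local map close to a tangent bifurcation
    if x > 1.0:                        # episode ends: reinject the orbit and
        x = rng.uniform(0.0, 1e-2)     # randomly redraw the control parameter
        eps = 10.0 ** rng.uniform(-7, -5)
    traj[n] = x
print("fraction of time in laminar (quasi-stationary) episodes:",
      np.mean(traj < 1e-2))
```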

  11. Orthogonality preserving infinite dimensional quadratic stochastic operators

    International Nuclear Information System (INIS)

    Akın, Hasan; Mukhamedov, Farrukh

    2015-01-01

    In the present paper, we consider a notion of orthogonality preserving nonlinear operators. We introduce π-Volterra quadratic operators in finite and infinite dimensional settings. It is proved that any orthogonality preserving quadratic operator on the finite dimensional simplex is a π-Volterra quadratic operator. In the infinite dimensional setting, we describe all π-Volterra operators in terms of orthogonality preserving operators.
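
    In the finite dimensional setting a Volterra quadratic stochastic operator acts on the simplex as V(x)_k = x_k(1 + Σ_i a_ki x_i) with a skew-symmetric matrix satisfying |a_ki| ≤ 1; a quick numerical check that iteration preserves the simplex (the matrix below is an arbitrary admissible example):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.uniform(-1, 1, (n, n))
A = np.clip(A - A.T, -1.0, 1.0)          # skew-symmetric with |a_ki| <= 1

def volterra(x):
    """One step of the Volterra quadratic stochastic operator."""
    return x * (1.0 + A @ x)

x = rng.dirichlet(np.ones(n))            # a random point on the simplex
for _ in range(100):
    x = volterra(x)
    assert x.min() >= -1e-12 and abs(x.sum() - 1.0) < 1e-9
print("after 100 iterations, still on the simplex:", x)
```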

  12. Stochastic quantum gravity: the (2+1)-dimensional case

    International Nuclear Information System (INIS)

    Hosoya, Akio

    1991-01-01

    First, the striking coincidences between quantum field theory in curved space-time and quantum gravity, when they exhibit stochasticity, are pointed out. To explore their origin, (2+1)-dimensional quantum gravity is considered as a toy model. It is shown that the torus universe in (2+1)-dimensional quantum gravity is quantum-chaotic in a rigorous sense. (author). 15 refs

  13. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  14. Perturbative QCD Lagrangian at large distances and stochastic dimensionality reduction

    International Nuclear Information System (INIS)

    Shintani, M.

    1986-10-01

    We construct a Lagrangian for perturbative QCD at large distances within the covariant operator formalism which explains the color confinement of quarks and gluons while maintaining unitarity of the S-matrix. It is also shown that when interactions are switched off, the mechanism of stochastic dimensionality reduction is operative in the system due to exact super-Lorentz symmetries. (orig.)

  15. Stochastic confinement and dimensional reduction. 1

    International Nuclear Information System (INIS)

    Ambjoern, J.; Olesen, P.; Peterson, C.

    1984-03-01

    By Monte Carlo calculations on a 16^4 lattice the authors investigate four-dimensional SU(2) lattice gauge theory with respect to the conjecture that at large distances this theory reduces approximately to two-dimensional SU(2) lattice gauge theory. Good numerical evidence is found for this conjecture. As a by-product the SU(2) string tension is also measured and good agreement is found with scaling. The 'adjoint string tension' is also found to have a reasonable scaling behaviour. (Auth.)

  16. Stochastic confinement and dimensional reduction. Pt. 1

    International Nuclear Information System (INIS)

    Ambjoern, J.; Olesen, P.; Peterson, C.

    1984-01-01

    By Monte Carlo calculations on a 12^4 lattice we investigate four-dimensional SU(2) lattice gauge theory with respect to the conjecture that at large distances this theory reduces approximately to two-dimensional SU(2) lattice gauge theory. We find good numerical evidence for this conjecture. As a by-product we also measure the SU(2) string tension and find reasonable agreement with scaling. The 'adjoint string tension' is also found to have a reasonable scaling behaviour. (orig.)

  17. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called 'curse of dimensionality', coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with an increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity.
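
    The loss of differentiation is easy to demonstrate numerically: the relative contrast between the nearest and farthest pairwise distances of uniformly random points collapses as the dimension grows (sample sizes are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(7)
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(200, d))
    G = X @ X.T                                  # Gram matrix
    sq = np.diag(G)
    d2 = sq[:, None] + sq[None, :] - 2 * G       # squared pairwise distances
    dist = np.sqrt(np.maximum(d2[np.triu_indices(len(X), k=1)], 0.0))
    print(f"d={d:5d}  (max - min)/min distance: "
          f"{(dist.max() - dist.min()) / dist.min():.3f}")
```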

  1. Stochastic aspects of two-dimensional vibration diagnostics

    International Nuclear Information System (INIS)

    Pazsit, I.; Antonopoulos-Domis, M.; Gloeckler, O.

    1985-01-01

    The aim of this paper is to investigate the stochastic features of two-dimensional lateral damped oscillations of PWR core internals that are induced by random force components. It is also investigated how these vibrating components, or the forces giving rise to the vibrations, could be diagnosed through the analysis of displacement or neutron noise signals. The approach pursued here is to select a realisation of the random force components; then the equations of motion are integrated and the time history of the displacement components is obtained. From here various statistical descriptors of the motion, such as trajectory patterns, spectra, PDF functions, etc., can be calculated. It was investigated how these statistical descriptors depend on the characteristics of the driving force for both stationary and non-stationary cases. A conclusion of possible diagnostic relevance is that, under certain circumstances, the PDF functions could indicate whether a particular peak in the corresponding power spectra belongs to a resonance in the system transfer function or rather to a resonance in the external driving force. (author)
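
    A sketch of the simulation loop described above (frequencies, damping and noise level are illustrative assumptions): one realization of white-noise force components drives a 2D damped oscillator, after which descriptors such as the displacement PDF can be computed.

```python
import numpy as np

rng = np.random.default_rng(8)
dt, n = 1e-3, 100_000
omega = np.array([12.0, 17.0])     # lateral eigenfrequencies (assumed), rad/s
zeta, sig_f = 0.02, 1.0            # damping ratio, force intensity (assumed)

x = np.zeros(2); v = np.zeros(2)
hist = np.empty((n, 2))
for i in range(n):
    f = sig_f * rng.standard_normal(2) / np.sqrt(dt)   # white-noise force
    v += (f - 2 * zeta * omega * v - omega**2 * x) * dt
    x += v * dt
    hist[i] = x

pdf, edges = np.histogram(hist[:, 0], bins=60, density=True)  # PDF descriptor
print("std of the two displacement components:", hist.std(axis=0))
```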

  2. Infinite Dimensional Stochastic Analysis: in Honor of Hui-Hsiung Kuo

    CERN Document Server

    Sundar, Pushpa

    2008-01-01

    This volume contains current work at the frontiers of research in infinite dimensional stochastic analysis. It presents a carefully chosen collection of articles by experts to highlight the latest developments in white noise theory, infinite dimensional transforms, quantum probability, stochastic partial differential equations, and applications to mathematical finance. Included in this volume are expository papers which will help increase communication between researchers working in these areas. The tools and techniques presented here will be of great value to research mathematicians and graduate students.

  3. Perturbative QCD Lagrangian at large distances and stochastic dimensionality reduction. Pt. 2

    International Nuclear Information System (INIS)

    Shintani, M.

    1986-11-01

    Using the method of stochastic dimensional reduction, we derive a four-dimensional quantum effective Lagrangian for the classical Yang-Mills system coupled to the Gaussian white noise. It is found that the Lagrangian coincides with the perturbative QCD at large distances constructed in our previous paper. That formalism is based on the local covariant operator formalism which maintains the unitarity of the S-matrix. Furthermore, we show the non-perturbative equivalence between super-Lorentz invariant sectors of the effective Lagrangian and two dimensional QCD coupled to the adjoint pseudo-scalars. This implies that stochastic dimensionality reduction by two is approximately operative in QCD at large distances. (orig.)

  4. High-speed Stochastic Fatigue Testing

    DEFF Research Database (Denmark)

    Brincker, Rune; Sørensen, John Dalsgaard

    1990-01-01

    Good stochastic fatigue tests are difficult to perform. One of the major reasons is that ordinary servohydraulic loading systems realize the prescribed load history accurately at very low testing speeds only. If the speeds used for constant amplitude testing are applied to stochastic fatigue testing, the prescribed load history cannot be realized accurately.

  5. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    Full Text Available High dimensional entanglement. M. McLaren, F.S. Roux & A. Forbes. 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of KwaZulu-Natal.

  8. Extended Jacobi Elliptic Function Rational Expansion Method and Its Application to (2+1)-Dimensional Stochastic Dispersive Long Wave System

    International Nuclear Information System (INIS)

    Song Lina; Zhang Hongqing

    2007-01-01

    In this work, by means of a generalized method and symbolic computation, we extend the Jacobi elliptic function rational expansion method to uniformly construct a series of stochastic wave solutions for stochastic evolution equations. To illustrate the effectiveness of our method, we take the (2+1)-dimensional stochastic dispersive long wave system as an example. We not only have obtained some known solutions, but also have constructed some new rational formal stochastic Jacobi elliptic function solutions.

  7. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.

  8. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or policy variables) are provided.

  9. Spontaneous transition to a stochastic state in a four-dimensional Yang-Mills quantum theory

    International Nuclear Information System (INIS)

    Semikhatov, A.M.

    1983-01-01

    The quantum expectation values in a four-dimensional Yang-Mills theory are represented in each topological sector as expectation values over the diffusion which develops in the 'fourth' Euclidean time. The Langevin equations of this diffusion are stochastic duality equations in the A_4 = 0 gauge.

  10. Numerical Resolution of N-dimensional Fokker-Planck stochastic equations

    International Nuclear Information System (INIS)

    Garcia-Olivares, R. A.; Munoz Roldan, A.

    1992-01-01

    This document describes the use of a library of programs able to solve stochastic Fokker-Planck equations in an N-dimensional space. The input data are essentially: (i) the initial distribution of the stochastic variable, (ii) the drift and fluctuation coefficients as a function of the state (which can be obtained from the transition probabilities between neighboring states) and (iii) some parameters controlling the run. The latest version of the library accepts sources and sinks defined in the state space. The output is the temporal evolution of the probability distribution in the space defined by an N-dimensional grid. Some applications and readings in Synergetics, self-organization, transport phenomena, ecology and other fields are suggested. If the probability distribution is interpreted as a distribution of particles, then the codes can be used to solve the N-dimensional problem of advection-diffusion. (Author) 16 refs
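
    In one dimension the library's task amounts to evolving ∂p/∂t = -∂x(A p) + ½ ∂xx(B p) from an initial distribution, given drift A(x) and fluctuation B(x); a minimal explicit sketch for an Ornstein-Uhlenbeck process, whose known stationary Gaussian provides a check (grid, time step and boundary treatment are illustrative choices).

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201); dx = x[1] - x[0]
A = -x                          # drift of an Ornstein-Uhlenbeck process
B = np.ones_like(x)             # constant fluctuation (diffusion) coefficient
dt = 0.2 * dx**2                # explicit-scheme stability margin (assumed)

p = np.exp(-(x - 2.0) ** 2 / 0.1)
p /= p.sum() * dx               # normalized initial distribution
for _ in range(int(5.0 / dt)):  # integrate to t = 5
    Ap, Bp = A * p, B * p
    drift = -(Ap[2:] - Ap[:-2]) / (2.0 * dx)                     # -d(Ap)/dx
    diffu = (Bp[2:] - 2.0 * Bp[1:-1] + Bp[:-2]) / (2.0 * dx**2)  # 0.5 d2(Bp)/dx2
    p[1:-1] += dt * (drift + diffu)
    p[0] = p[-1] = 0.0          # absorbing boundary cells

exact = np.exp(-x**2) / np.sqrt(np.pi)   # stationary law for A = -x, B = 1
print("L1 distance to the stationary Gaussian:", np.abs(p - exact).sum() * dx)
```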

  11. Stochastic and collisional diffusion in two-dimensional periodic flows

    International Nuclear Information System (INIS)

    Doxas, I.; Horton, W.; Berk, H.L.

    1990-05-01

    The global effective diffusion coefficient D* for a two-dimensional system of convective rolls with a time-dependent perturbation added is calculated. The perturbation produces a background diffusion coefficient D, which is calculated analytically using the Melnikov-Arnold integral. This intrinsic diffusion coefficient is then enhanced by the unperturbed flow to produce the global effective diffusion coefficient D*, which we can calculate theoretically for a certain range of parameters. The theoretical value agrees well with numerical simulations. 23 refs., 4 figs
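
    A numerical reading of D* (roll amplitude, wavenumber and the background D are arbitrary choices): tracers are advected in the steady roll field derived from a streamfunction, Brownian kicks model the perturbation-induced background diffusion, and the effective diffusivity is read off the mean squared displacement.

```python
import numpy as np

rng = np.random.default_rng(9)
U, k, D = 1.0, np.pi, 0.01          # roll amplitude, wavenumber, background D
dt, n_steps, n_trac = 1e-3, 20_000, 1000

# Streamfunction psi = (U/k) sin(kx) sin(ky):  vx = dpsi/dy, vy = -dpsi/dx.
pos = rng.uniform(0, 2 * np.pi / k, size=(n_trac, 2))
start = pos.copy()
for _ in range(n_steps):
    xk, yk = k * pos[:, 0], k * pos[:, 1]
    v = np.column_stack((U * np.sin(xk) * np.cos(yk),
                         -U * np.cos(xk) * np.sin(yk)))
    pos += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n_trac, 2))

msd = ((pos - start) ** 2).sum(axis=1).mean()
print("effective D* ~ MSD/(4t) =", msd / (4 * n_steps * dt), "vs bare D =", D)
```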

  12. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  13. Digital hardware implementation of a stochastic two-dimensional neuron model.

    Science.gov (United States)

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), realizing an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
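
    A floating-point prototype of the same ingredients (the FPGA work uses fixed-point pipelines and a specific qualitative neuron model; the FitzHugh-Nagumo stand-in and all constants here are illustrative assumptions): an Ornstein-Uhlenbeck process supplies the stochastic input current.

```python
import numpy as np

rng = np.random.default_rng(10)
dt, n = 0.01, 100_000
a, b, eps, I0 = 0.7, 0.8, 0.08, 0.5      # FitzHugh-Nagumo constants (assumed)
tau, sig = 5.0, 0.3                      # Ornstein-Uhlenbeck noise current

v, w, eta = -1.2, -0.6, 0.0
spikes, above = 0, False
for _ in range(n):
    # Ornstein-Uhlenbeck noise current with stationary std sig.
    eta += -eta / tau * dt + sig * np.sqrt(2 * dt / tau) * rng.standard_normal()
    v += dt * (v - v**3 / 3 - w + I0 + eta)   # fast membrane variable
    w += dt * eps * (v + a - b * w)           # slow recovery variable
    if v > 1.0 and not above:
        spikes, above = spikes + 1, True
    elif v < 0.0:
        above = False
print("spikes in", n * dt, "time units:", spikes)
```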

  14. Gray and multigroup radiation transport models for two-dimensional binary stochastic media using effective opacities

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2016-01-01

    One-dimensional models for the transport of radiation through binary stochastic media do not work in multiple dimensions. Authors have attempted to modify or extend the 1D models to work in multiple dimensions without success. Analytic one-dimensional models are successful in 1D only when assuming greatly simplified physics. State-of-the-art theories for stochastic media radiation transport do not address multiple dimensions and temperature-dependent physics coefficients. Here, the concept of effective opacities and effective heat capacities is found to represent well the ensemble-averaged transport solutions in cases with gray or multigroup temperature-dependent opacities and constant or temperature-dependent heat capacities. In every case analyzed here, effective physics coefficients fit the transport solutions over a useful range of parameter space. The transport equation is solved with the spherical harmonics method with angle orders of n=1 and 5. Although the details depend on what order of solution is used, the general results are similar, independent of angular order. - Highlights: • Gray and multigroup radiation transport is done through 2D stochastic media. • Approximate models for the mean radiation field are found for all test problems. • Effective opacities are adjusted to fit the means of stochastic media transport. • Test problems include temperature-dependent opacities and heat capacities. • Transport solutions are done with angle orders n=1 and 5.

  15. Analytic solution of the two-dimensional Fokker-Planck equation governing stochastic ion heating by a lower hybrid wave

    International Nuclear Information System (INIS)

    Malescio, G.

    1981-04-01

    The two-dimensional Fokker-Planck equation describing the ion motion in a coherent lower hybrid wave above the stochasticity threshold is analytically solved. An expression is given for the steady state power dissipation

  16. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
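
    The multiplicative structure being tested can be sketched in a few lines (grid size, decorrelation scales and the clipping level are arbitrary choices, not ECMWF's operational settings): a spatially smoothed, AR(1)-in-time, zero-mean pattern r perturbs the net parametrised tendency as T = (1 + r) T_param.

```python
import numpy as np

rng = np.random.default_rng(11)
nx, tau, dt, sig = 64, 6.0, 1.0, 0.5     # grid, decorrelation time, pattern std

kx = np.fft.fftfreq(nx)[:, None]
ky = np.fft.fftfreq(nx)[None, :]
lowpass = np.exp(-(kx**2 + ky**2) / (2 * 0.02**2))   # spatial correlation filter

def smooth_noise():
    """Spatially correlated, unit-variance random field."""
    w = np.fft.ifft2(np.fft.fft2(rng.standard_normal((nx, nx))) * lowpass).real
    return w / w.std()

phi = np.exp(-dt / tau)                   # AR(1) coefficient in time
r = sig * smooth_noise()
for _ in range(24):                       # evolve the pattern for 24 steps
    r = phi * r + sig * np.sqrt(1 - phi**2) * smooth_noise()

T_param = np.ones((nx, nx))               # placeholder net physics tendency
T_pert = (1.0 + np.clip(r, -1.0, 1.0)) * T_param   # multiplicative perturbation
print("pattern std:", r.std(), "tendency range:", T_pert.min(), T_pert.max())
```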

  1. Stochastic volatility and multi-dimensional modeling in the European energy market

    Energy Technology Data Exchange (ETDEWEB)

    Vos, Linda

    2012-07-01

    In energy prices there is evidence for stochastic volatility. Stochastic volatility has an effect on the price of path-dependent options and therefore has to be modeled properly. We introduce a multi-dimensional non-Gaussian stochastic volatility model with leverage which can be used in energy pricing. It captures special features of energy prices like price spikes, mean-reversion, stochastic volatility and inverse leverage. Moreover, it allows modeling dependencies between different commodities. The derived forward price dynamics based on this multi-variate spot price model provide a very flexible structure, including contango, backwardation and hump-shaped forward curves. Alternatively, energy prices could be modeled by a 2-factor model consisting of a non-Gaussian stable CARMA process and a non-stationary trend modeled by a Levy process. This model is also able to capture special features like price spikes, mean reversion and the low-frequency dynamics in the market. A robust L1-filter is introduced to filter out the states of the CARMA process. When applied to German electricity EEX exchange data, an overall negative risk premium is found; however, close to delivery a positive risk premium is observed. (Author)

  2. Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble

    Science.gov (United States)

    Jankov, I.

    2017-12-01

    It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized, and ensemble members were produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity, and perturbing soil moisture at the initial time was also tested. First, iterative testing was conducted to assess the initial performance of several configuration settings (e.g., a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using several stochastic approaches in combination.

  3. Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar

    Science.gov (United States)

    Chou, Philip A.

    1989-11-01

    We propose using two-dimensional stochastic context-free grammars for image recognition, in a manner analogous to using hidden Markov models for speech recognition. The value of the approach is demonstrated in a system that recognizes printed, noisy equations. The system uses a two-dimensional probabilistic version of the Cocke-Younger-Kasami parsing algorithm to find the most likely parse of the observed image, and then traverses the corresponding parse tree in accordance with translation formats associated with each production rule, to produce eqn/troff commands for the imaged equation. In addition, it uses two-dimensional versions of the Inside/Outside and Baum re-estimation algorithms for learning the parameters of the grammar from a training set of examples. Parsing the image of a simple noisy equation currently takes about one second of CPU time on an Alliant FX/80.
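
    The one-dimensional core of the system, which the paper lifts to two dimensions, is probabilistic CYK parsing of a stochastic context-free grammar in Chomsky normal form (the toy grammar below is illustrative):

```python
from collections import defaultdict

# A tiny stochastic CFG in Chomsky normal form (illustrative):
# S -> A B (0.6) | S S (0.4),  A -> 'a' (1.0),  B -> 'b' (1.0).
lexical = {"a": {"A": 1.0}, "b": {"B": 1.0}}
binary = [("S", "A", "B", 0.6), ("S", "S", "S", 0.4)]

def cyk(tokens):
    """best[(i, j)][X] = probability of the best parse of tokens[i:j] from X."""
    n = len(tokens)
    best = defaultdict(dict)
    for i, t in enumerate(tokens):
        for X, p in lexical.get(t, {}).items():
            best[(i, i + 1)][X] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for X, Y, Z, p in binary:
                    cand = (p * best[(i, k)].get(Y, 0.0)
                              * best[(k, j)].get(Z, 0.0))
                    if cand > best[(i, j)].get(X, 0.0):
                        best[(i, j)][X] = cand
    return best[(0, n)].get("S", 0.0)

print("P(best parse of 'abab'):", cyk(list("abab")))   # 0.4 * 0.6 * 0.6
```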

  4. Three dimensional nuclear magnetic resonance spectroscopic imaging of sodium ions using stochastic excitation and oscillating gradients

    International Nuclear Information System (INIS)

    Frederick, B.deB.

    1994-12-01

    Nuclear magnetic resonance (NMR) spectroscopic imaging of ²³Na holds promise as a non-invasive method of mapping Na⁺ distributions, and for differentiating pools of Na⁺ ions in biological tissues. However, due to the NMR relaxation properties of ²³Na in vivo, a large fraction of Na⁺ is not visible with conventional NMR imaging methods. An alternate imaging method, based on stochastic excitation and oscillating gradients, has been developed which is well adapted to measuring nuclei with short T₂. Contemporary NMR imaging techniques have dead times of up to several hundred microseconds between excitation and sampling, comparable to the shortest in vivo ²³Na T₂ values, causing significant signal loss. An imaging strategy based on stochastic excitation has been developed which greatly reduces experiment dead time by reducing peak radiofrequency (RF) excitation power and using a novel RF circuit to speed probe recovery. Continuously oscillating gradients are used to eliminate transient eddy currents. Stochastic ¹H and ²³Na spectroscopic imaging experiments have been performed on a small animal system with dead times as low as 25 μs, permitting spectroscopic imaging with 100% visibility in vivo. As an additional benefit, the encoding time for a 32×32×32 spectroscopic image is under 30 seconds. The development and analysis of stochastic NMR imaging has been hampered by limitations of the existing phase demodulation reconstruction technique. Three-dimensional imaging was impractical due to reconstruction time, and the design and analysis of proposed experiments was limited by the mathematical intractability of the reconstruction method. A new reconstruction method for stochastic NMR based on Fourier interpolation has been formulated, combining the advantage of a several-hundredfold reduction in reconstruction time with a straightforward mathematical form.

  5. Stochastic evolutions and hadronization of highly excited hadronic matter

    International Nuclear Information System (INIS)

    Carruthers, P.

    1984-01-01

    Stochastic ingredients of high energy hadronic collisions are analyzed, with emphasis on multiplicity distributions. The conceptual simplicity of the k-cell negative binomial distribution is related to the evolution of probability distributions via the Fokker-Planck and related equations. The connection to underlying field theory ideas is sketched. 17 references

  6. Stochastic models of solute transport in highly heterogeneous geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, V.N.; Korotkin, I.A.; Pruess, K.; Goloviznin, V.M.; Sorokovikova, O.S.

    2009-09-15

    A stochastic model of anomalous diffusion was developed in which transport occurs by random motion of Brownian particles, described by distribution functions of random displacements with heavy (power-law) tails. One variant of an effective algorithm for random function generation with a power-law asymptotic and arbitrary factor of asymmetry is proposed that is based on the Gnedenko-Levy limit theorem and makes it possible to reproduce all known Levy α-stable fractal processes. A two-dimensional stochastic random walk algorithm has been developed that approximates anomalous diffusion with streamline-dependent and space-dependent parameters. The motivation for introducing such a type of dispersion model is the observed fact that tracers in natural aquifers spread at different super-Fickian rates in different directions. For this and other important cases, stochastic random walk models are the only known way to solve the so-called multiscaling fractional-order diffusion equation with space-dependent parameters. Some comparisons of model results and field experiments are presented.
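
    One standard generator for the required heavy-tailed displacements is the Chambers-Mallows-Stuck transform for symmetric α-stable variates (the asymmetric case used for arbitrary skewness adds one more term; the parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(12)

def stable_symmetric(alpha, size):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1.0 - alpha) / alpha))

alpha, n = 1.5, 10_000
steps = stable_symmetric(alpha, (n, 2))       # heavy-tailed 2D displacements
path = steps.cumsum(axis=0)                   # a Levy-flight random walk
print("99.9% step quantile / median step:",
      np.quantile(np.abs(steps), 0.999) / np.median(np.abs(steps)))
```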

  7. Calculation of three-dimensional MHD equilibria with islands and stochastic regions

    International Nuclear Information System (INIS)

    Reiman, A.; Greenside, H.

    1986-08-01

    A three-dimensional MHD equilibrium code is described that does not assume the existence of good surfaces. Given an initial guess for the magnetic field, the code proceeds by calculating the pressure-driven current and then by updating the field using Ampere's law. The numerical algorithm to solve the magnetic differential equation for the pressure-driven current is described, and demonstrated for model fields having islands and stochastic regions. The numerical algorithm which solves Ampere's law in three dimensions is also described. Finally, the convergence of the code is illustrated for a particular stellarator equilibrium with no large islands

  8. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    Energy Technology Data Exchange (ETDEWEB)

    Samin, Adib J. [The Department of Mechanical and Aerospace Engineering, The Ohio State University, 201 W 19" t" h Avenue, Columbus, Ohio 43210 (United States)

    2016-05-15

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  9. Collective, stochastic and nonequilibrium behavior of highly excited hadronic matter

    Energy Technology Data Exchange (ETDEWEB)

    Carruthers, P [Los Alamos National Lab., NM (USA). Theoretical Div.

    1984-04-23

    We discuss selected problems concerning the dynamics and stochastic behavior of highly excited matter, particularly the QCD plasma. For the latter we consider the equation of state, kinetics, quasiparticles, flow properties and possible chaos and turbulence. The promise of phase space distribution functions for covariant transport and kinetic theory is stressed. The possibility and implications of a stochastic bag are spelled out. A simplified space-time model of hadronic collisions is pursued, with applications to A-A collisions and other matters. The domain wall between hadronic and plasma phase is of potential importance: its thickness and relation to surface tension is noticed. Finally, we review the recently developed stochastic cell model of multiparticle distributions and KNO scaling. This topic leads to the notion that fractional dimensions are involved in a rather general dynamical context. We speculate that various scaling phenomena are independent of the full dynamical structure, depending only on a general stochastic framework having to do with simple maps and strange attractors. 42 refs.

  10. Stochastic self-propagating star formation in three-dimensional disk galaxy simulations

    International Nuclear Information System (INIS)

    Statler, T.; Comins, N.; Smith, B.F.

    1983-01-01

    Stochastic self-propagating star formation (SSPSF) is a process of forming new stars through the compression of the interstellar medium by supernova shock waves. Coupling this activity with galactic differential rotation produces spiral structure in two-dimensional disk galaxy simulations. In this paper the first results of a three-dimensional SSPSF simulation of disk galaxies are reported. Our model generates less impressive spirals than do the two-dimensional simulations. Although some spirals do appear in equilibrium, more frequently we observe spirals as non-equilibrium states of the models: as the spiral arms evolve, they widen until the spiral structure is no longer discernible. The two free parameters that we vary in this study are the probability of star formation due to a recent, nearby explosion, and the relaxation time for the interstellar medium to return to a condition of maximum star formation after it has been cleared out by an explosion and subsequent star formation. We find that equilibrium spiral structure is formed over a much smaller range of these parameters in our three-dimensional SSPSF models than in similar two-dimensional models. We discuss possible reasons for these results as well as improvements on the model which are being explored

  11. Galactic Cosmic-ray Transport in the Global Heliosphere: A Four-Dimensional Stochastic Model

    Science.gov (United States)

    Florinski, V.

    2009-04-01

    We study galactic cosmic-ray transport in the outer heliosphere and heliosheath using a newly developed transport model based on stochastic integration of the phase-space trajectories of Parker's equation. The model employs backward integration of the diffusion-convection transport equation using Ito calculus and is four-dimensional in space+momentum. We apply the model to the problem of galactic proton transport in the heliosphere during a negative solar minimum. Model results are compared with the Voyager measurements of galactic proton radial gradients and spectra in the heliosheath. We show that the heliosheath is not as efficient in diverting cosmic rays during solar minima as predicted by earlier two-dimensional models.

  13. Enhanced three-dimensional stochastic adjustment for combined volcano geodetic networks

    Science.gov (United States)

    Del Potro, R.; Muller, C.

    2009-12-01

    Volcano geodesy is unquestionably a necessary technique in studies of physical volcanology and for eruption early warning systems. However, as every volcano geodesist knows, obtaining measurements of the required resolution using traditional campaigns and techniques is time consuming and requires considerable manpower. Moreover, most volcano geodetic networks worldwide use a combination of data from traditional techniques (levelling, electronic distance measurements (EDM), triangulation and Global Navigation Satellite Systems (GNSS)) but, in most cases, these data are surveyed, analysed and adjusted independently. This then leaves it to the authors’ criteria to decide which technique renders the most realistic results in each case. Herein we present a way of solving the problem of inter-methodology data integration in a cost-effective manner, following a methodology where all the geodetic data of a redundant, combined network (e.g. surveyed by GNSS, levelling, distance and angular data, InSAR, extensometers, etc.) are adjusted stochastically within a single three-dimensional reference frame. The adjustment methodology is based on the least mean square method and links the data with their geometrical component, providing combined, precise, three-dimensional displacement vectors relative to external reference points, as well as stochastically quantified, benchmark-specific uncertainty ellipsoids. Three steps in the adjustment allow identifying, and hence dismissing, flagrant measurement errors (antenna height, atmospheric effects, etc.), checking the consistency of external reference points and performing a final adjustment of the data. Moreover, since the statistical indicators can be obtained from expected uncertainties in the measurements of the different geodetic techniques used (i.e. independent of the measured data), it is possible to run a priori simulations of a geodetic network in order to constrain its resolution, and reduce logistics, before the network is even built. In this
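
    As an illustration of the core computation behind such a stochastic adjustment, the following Python sketch (a generic weighted least-squares step with assumed toy inputs, not the TRINET+ implementation) estimates coordinate corrections together with the covariance matrix from which uncertainty ellipsoids are derived:

        # Generic weighted least-squares adjustment step (toy example).
        # A: design matrix linking observations to coordinate corrections,
        # l: misclosure vector, P: weights from a-priori observation variances.
        import numpy as np

        def adjust(A, l, P):
            N = A.T @ P @ A                      # normal-equation matrix
            x = np.linalg.solve(N, A.T @ P @ l)  # estimated corrections
            v = A @ x - l                        # residuals
            dof = A.shape[0] - A.shape[1]        # redundancy of the network
            s0_sq = (v.T @ P @ v) / dof          # a-posteriori variance factor
            Q = s0_sq * np.linalg.inv(N)         # covariance of the estimates;
            return x, Q                          # uncertainty ellipsoids come from Q

        # Toy usage: 4 observations of 2 parameters with unequal precision.
        A = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
        l = np.array([1.02, 1.98, 3.05, -0.97])
        P = np.diag([1., 1., 4., 4.])            # weights = 1 / sigma^2
        x, Q = adjust(A, l, P)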

  14. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Directory of Open Access Journals (Sweden)

    Gianluca Calcagni

    2017-10-01

    Full Text Available We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  15. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Ronco, Michele

    2017-01-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  16. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Science.gov (United States)

    Calcagni, Gianluca; Ronco, Michele

    2017-10-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  17. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  18. Collective, stochastic and nonequilibrium behavior of highly excited hadronic matter

    International Nuclear Information System (INIS)

    Carruthers, P.

    1983-01-01

    We discuss selected problems concerning the dynamics and stochastic behavior of highly excited matter, particularly the QCD plasma. For the latter we consider the equation of state, kinetics, quasiparticles, flow properties and possible chaos and turbulence. The promise of phase space distribution functions for covariant transport and kinetic theory is stressed. The possibility and implications of a stochastic bag are spelled out. A simplified space-time model of hadronic collisions is pursued, with applications to A-A collisions and other matters. The domain wall between hadronic and plasma phase is of potential importance: its thickness and relation to surface tension are noted. Finally, we review the recently developed stochastic cell model of multiparticle distributions and KNO scaling. This topic leads to the notion that fractal dimensions are involved in a rather general dynamical context. We speculate that various scaling phenomena are independent of the full dynamical structure, depending only on a general stochastic framework having to do with simple maps and strange attractors. 42 references

  19. Passive tracer in a flow corresponding to two-dimensional stochastic Navier–Stokes equations

    International Nuclear Information System (INIS)

    Komorowski, Tomasz; Peszat, Szymon; Szarek, Tomasz

    2013-01-01

    In this paper we prove the law of large numbers and central limit theorem for trajectories of a particle carried by a two-dimensional Eulerian velocity field. The field is given by a solution of a stochastic Navier–Stokes system with non-degenerate noise. The spectral gap property, with respect to the Wasserstein metric, for such a system was shown in Hairer and Mattingly (2008 Ann. Probab. 36 2050–91). In this paper we show that a similar property holds for the environment process corresponding to the Lagrangian observations of the velocity. Consequently we conclude the law of large numbers and the central limit theorem for the tracer. The proof of the central limit theorem relies on the martingale approximation of the trajectory process. (paper)

  20. Adaptive stochastic Galerkin FEM with hierarchical tensor representations

    KAUST Repository

    Eigel, Martin

    2016-01-01

    PDEs with stochastic data usually lead to very high-dimensional algebraic problems which easily become unfeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive

  1. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
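
    A generic sketch of this early-warning recipe in Python (illustrative only; the Tangled Nature mean-field equations are not reproduced, and the toy drift f below is an assumption) linearizes the deterministic drift at a quasi-stable state and monitors the overlap of the instantaneous state with the least-stable eigendirection:

        # Early-warning indicator from the stability matrix of an assumed drift.
        import numpy as np

        def jacobian(f, x, eps=1e-6):
            # Finite-difference stability matrix of the deterministic drift f at x.
            d = x.size
            J = np.empty((d, d))
            fx = f(x)
            for j in range(d):
                xp = x.copy()
                xp[j] += eps
                J[:, j] = (f(xp) - fx) / eps
            return J

        # Toy drift with a stable fixed point at the origin.
        f = lambda x: np.array([-x[0] + 0.5 * x[1], -0.2 * x[1] + x[0] ** 2])
        x_star = np.zeros(2)                        # quasi-stable configuration
        J = jacobian(f, x_star)
        eigvals, eigvecs = np.linalg.eig(J)
        k = np.argmax(eigvals.real)                 # least-stable direction
        x_now = np.array([0.1, -0.05])              # instantaneous stochastic state
        warning = abs((x_now - x_star) @ eigvecs[:, k].real)  # overlap indicator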

  2. Numerical Resolution of N-dimensional Fokker-Planck stochastic equations

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Olivares, R A; Munoz Roldan, A

    1992-07-01

    This document describes the use of a library of programs able to solve stochastic Fokker-Planck equations in a N-dimensional space. The input data are essentially: (i) the initial distribution of the stochastic variable, (ii) the drift and fluctuation coefficients as a function of the state (which can be obtained from the transition probabilities between neighboring states) and (iii) some parameters controlling the run. The latest version of the library accepts sources and sinks defined in the state space. The output is the temporal evolution of the probability distribution in the space defined by a N-dimensional grid. Some applications and further reading in synergetics, self-organization, transport phenomena, ecology and other fields are suggested. If the probability distribution is interpreted as a distribution of particles then the codes can be used to solve the N-dimensional problem of advection-diffusion. (Author) 16 refs.
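
    A minimal one-dimensional analogue of such a solver (the library itself is N-dimensional; the grid, drift and fluctuation coefficients below are assumed toy choices) integrates the Fokker-Planck equation dP/dt = -d(aP)/dx + 0.5 d^2(bP)/dx^2 with explicit finite differences in Python:

        # Explicit finite-difference evolution of a 1-D Fokker-Planck equation.
        import numpy as np

        nx, dx, dt, n_steps = 201, 0.1, 0.001, 5000
        x = (np.arange(nx) - nx // 2) * dx
        a = -x                                   # linear restoring drift a(x)
        b = np.full(nx, 0.5)                     # constant fluctuation coefficient b(x)
        P = np.exp(-((x - 1.0) ** 2) / 0.1)      # initial distribution
        P /= P.sum() * dx

        for _ in range(n_steps):
            flux = a * P                          # drift term
            dPdt = -np.gradient(flux, dx) \
                   + 0.5 * np.gradient(np.gradient(b * P, dx), dx)
            P = np.maximum(P + dt * dPdt, 0.0)    # forward Euler step, clipped at 0
            P /= P.sum() * dx                     # renormalize total probability to 1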

  3. Quantitative study of quasi-one-dimensional Bose gas experiments via the stochastic Gross-Pitaevskii equation

    International Nuclear Information System (INIS)

    Cockburn, S. P.; Gallucci, D.; Proukakis, N. P.

    2011-01-01

    The stochastic Gross-Pitaevskii equation is shown to be an excellent model for quasi-one-dimensional Bose gas experiments, accurately reproducing the in situ density profiles recently obtained in the experiments of Trebbia et al. [Phys. Rev. Lett. 97, 250403 (2006)] and van Amerongen et al. [Phys. Rev. Lett. 100, 090402 (2008)] and the density fluctuation data reported by Armijo et al. [Phys. Rev. Lett. 105, 230402 (2010)]. To facilitate such agreement, we propose and implement a quasi-one-dimensional extension to the one-dimensional stochastic Gross-Pitaevskii equation for the low-energy, axial modes, while atoms in excited transverse modes are treated as independent ideal Bose gases.

  4. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  5. Compton harmonic resonances, stochastic instabilities, quasilinear diffusion, and collisionless damping with ultra-high intensity laser waves

    International Nuclear Information System (INIS)

    Rax, J.M.

    1992-04-01

    The dynamics of electrons in two-dimensional, linearly or circularly polarized, ultra-high intensity (above 10^18 W/cm^2) laser waves is investigated. The Compton harmonic resonances are identified as the source of various stochastic instabilities. Both Arnold diffusion and resonance overlap are considered. The quasilinear kinetic equation, describing the evolution of the electron distribution function, is derived, and the associated collisionless damping coefficient is calculated. The implications of these new processes are considered and discussed

  6. Backward Stochastic Riccati Equations and Infinite Horizon L-Q Optimal Control with Infinite Dimensional State Space and Random Coefficients

    International Nuclear Information System (INIS)

    Guatteri, Giuseppina; Tessitore, Gianmario

    2008-01-01

    We study the Riccati equation arising in a class of quadratic optimal control problems with infinite dimensional stochastic differential state equation and infinite horizon cost functional. We allow the coefficients, both in the state equation and in the cost, to be random. In such a context backward stochastic Riccati equations are backward stochastic differential equations on the whole positive real axis that involve quadratic non-linearities and take values in a non-Hilbertian space. We prove existence of a minimal non-negative solution and, under additional assumptions, its uniqueness. We show that such a solution allows us to perform the synthesis of the optimal control and investigate its attractivity properties. Finally the case where the coefficients are stationary is addressed and an example concerning a controlled wave equation in random media is proposed

  7. Analysis of distances between inclusions in finite one-dimensional binary stochastic materials

    International Nuclear Information System (INIS)

    Griesheimer, D. P.; Millman, D. L.

    2009-01-01

    In this paper we develop a statistical distribution for the number of inclusions present in a one-dimensional binary stochastic material of a finite length. From this distribution, an analytic solution for the expected number of inclusions present in a given problem is derived. For cases where the analytical solution for the expected number of inclusions is prohibitively expensive to compute, a simple, empirically derived approximation for the expected value is presented. A series of numerical experiments are used to bound the error of this approximation over the domain of interest. Finally, the above approximations are used to develop a methodology for determining the distribution of distances between adjacent inclusions in the material, subject to known problem conditions including: the total length of the problem, the length of each inclusion, and the expected volume fraction of inclusions in the problem. The new method is shown to be equivalent to the use of the infinite medium nearest neighbor distribution with an effective volume fraction to account for the finite nature of the material. Numerical results are presented for a wide range of problem parameters, in order to demonstrate the accuracy of this method and identify conditions where the method breaks down. In general, the technique is observed to produce excellent results (absolute error less than 1×10^-6) for problems with inclusion volume fractions less than 0.8 and a ratio of problem length to inclusion length greater than 25. For problems that do not fall into this category, the accuracy of the method is shown to be dependent on the particular combination of these parameters. A brief discussion of the relevance of this method for Monte Carlo chord length sampling algorithms is also provided. (authors)
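
    The setting can be illustrated with a short Monte Carlo sketch in Python (a sampling illustration with assumed parameters, not the paper's analytic derivation): realizations of a finite 1D binary material are built by alternating exponentially distributed matrix gaps with fixed-length inclusions, and the number of inclusions and adjacent-inclusion distances are tallied:

        # Monte Carlo realizations of a finite 1-D binary stochastic material.
        import numpy as np

        rng = np.random.default_rng(1)
        L, ell, vf = 100.0, 1.0, 0.2          # problem length, inclusion length, volume fraction
        mean_gap = ell * (1 - vf) / vf        # matrix chord length giving the target fraction

        counts, gaps = [], []
        for _ in range(5000):
            pos, n, prev_end = 0.0, 0, None
            while True:
                pos += rng.exponential(mean_gap)     # matrix segment
                if pos + ell > L:
                    break                            # next inclusion would not fit
                if prev_end is not None:
                    gaps.append(pos - prev_end)      # distance between adjacent inclusions
                prev_end = pos + ell
                n += 1
                pos += ell
            counts.append(n)

        print(np.mean(counts), np.mean(gaps))        # expected count and mean gap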

  8. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  9. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  10. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
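
    For reference, the classic gradient-based active-subspace recipe that the abstract contrasts with can be sketched in a few lines of Python (an illustration on an assumed toy function; the paper's gradient-free GP construction is not shown):

        # Classic active-subspace discovery from gradient samples.
        import numpy as np

        rng = np.random.default_rng(2)
        d, n_samples = 10, 2000
        w = rng.standard_normal(d)               # hidden 1-D active direction

        def grad_f(x):                           # f(x) = sin(w.x), so grad f = cos(w.x) * w
            return np.cos(x @ w) * w

        X = rng.standard_normal((n_samples, d))
        G = np.array([grad_f(x) for x in X])
        C = G.T @ G / n_samples                  # Monte Carlo estimate of E[grad f grad f^T]
        eigvals, eigvecs = np.linalg.eigh(C)
        W = eigvecs[:, ::-1][:, :1]              # leading eigenvector spans the AS
        Y = X @ W                                # low-dimensional projection to learn on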

  11. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [ORNL]; Olama, Mohammed M. [ORNL]; Dong, Jin [ORNL]; Djouadi, Seddik M. [ORNL]; Zhang, Yichen [University of Tennessee, Knoxville (UTK), Department of Electrical Engineering and Computer Science]

    2017-09-01

    The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal’s model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
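
    The generic linear-Gaussian Kalman recursion underlying such state-space predictors can be sketched as follows (a minimal Python illustration with assumed level-plus-trend dynamics and noise covariances; the paper's filter-based expectation-maximization parameter estimation is not shown):

        # One predict/update cycle of a linear-Gaussian Kalman filter.
        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (level + trend), assumed
        H = np.array([[1.0, 0.0]])               # we observe the level only
        Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
        R = np.array([[0.5]])                    # measurement noise covariance (assumed)

        def kalman_step(x, P, z):
            x_pred = F @ x                        # predict state one step ahead
            P_pred = F @ P @ F.T + Q
            y = z - H @ x_pred                    # innovation
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + K @ y                # measurement update
            P_new = (np.eye(2) - K @ H) @ P_pred
            return x_new, P_new, x_pred           # x_pred is the one-step-ahead forecast

        # Usage: x, P = np.zeros(2), np.eye(2)
        #        x, P, forecast = kalman_step(x, P, np.array([0.8]))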

  12. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China)]; Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)]

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  13. Time evolution of one-dimensional gapless models from a domain wall initial state: stochastic Loewner evolution continued?

    International Nuclear Information System (INIS)

    Calabrese, Pasquale; Hagendorf, Christian; Doussal, Pierre Le

    2008-01-01

    We study the time evolution of quantum one-dimensional gapless systems evolving from initial states with a domain wall. We generalize the path integral imaginary time approach that together with boundary conformal field theory allows us to derive the time and space dependence of general correlation functions. The latter are explicitly obtained for the Ising universality class, and the typical behavior of one- and two-point functions is derived for the general case. Possible connections with the stochastic Loewner evolution are discussed and explicit results for one-point time dependent averages are obtained for generic κ for boundary conditions corresponding to stochastic Loewner evolution. We use this set of results to predict the time evolution of the entanglement entropy and obtain the universal constant shift due to the presence of a domain wall in the initial state

  14. One-and two-dimensional topological charge distributions in stochastic optical fields

    CSIR Research Space (South Africa)

    Roux, FS

    2011-06-01

    Full Text Available The presentation on topological charge distributions in stochastic optical fields concludes that by using a combination of speckle fields one can produce inhomogeneous vortex distributions that allow both analytical calculations and numerical...

  15. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  16. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China)]; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)]

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  17. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  18. Exact Finite-Difference Schemes for d-Dimensional Linear Stochastic Systems with Constant Coefficients

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2013-01-01

    Full Text Available The authors attempt to construct the exact finite-difference schemes for linear stochastic differential equations with constant coefficients. The explicit solutions to Itô and Stratonovich linear stochastic differential equations with constant coefficients are adopted with the view of providing exact finite-difference schemes to solve them. In particular, the authors utilize the exact finite-difference schemes of Stratonovich type linear stochastic differential equations to solve the Kubo oscillator that is widely used in physics. Further, the authors prove that the exact finite-difference schemes can preserve the symplectic structure and first integral of the Kubo oscillator. The authors also use numerical examples to prove the validity of the numerical methods proposed in this paper.
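
    The flavor of such exact schemes is easy to see in the one-dimensional Ornstein-Uhlenbeck case, dX = -theta X dt + sigma dW, a linear SDE with constant coefficients whose known transition density yields a discrete update that is exact for any step size h, unlike Euler-Maruyama (a minimal Python illustration, not the schemes of the paper):

        # Exact discrete update for the Ornstein-Uhlenbeck process.
        import numpy as np

        rng = np.random.default_rng(3)
        theta, sigma, h, n = 2.0, 0.5, 0.1, 1000
        X = np.empty(n)
        X[0] = 1.0
        a = np.exp(-theta * h)                          # exact decay factor
        s = sigma * np.sqrt((1 - a**2) / (2 * theta))   # exact noise amplitude
        for k in range(n - 1):
            X[k + 1] = a * X[k] + s * rng.standard_normal()

        # The scheme reproduces the stationary variance sigma^2 / (2 theta):
        print(X[n // 2:].var(), sigma**2 / (2 * theta))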

  19. Stochastic resonance induced by the novel random transitions of two-dimensional weak damping bistable duffing oscillator and bifurcation of moment equation

    International Nuclear Information System (INIS)

    Zhang Guangjun; Xu Jianxue; Wang Jue; Yue Zhifeng; Zou Hailin

    2009-01-01

    In this paper stochastic resonance induced by the novel random transitions of a two-dimensional weak damping bistable Duffing oscillator is analyzed by the moment method. This kind of novel transition refers to motion among the three potential wells on the two sides of the bifurcation point of the original system in the presence of internal noise. Several conclusions are drawn. First, a semi-analytical result for the stochastic resonance induced by the novel random transitions of the two-dimensional weak damping bistable Duffing oscillator can be obtained, and the semi-analytical result is qualitatively compatible with that of Monte Carlo simulation. Second, a bifurcation of double-branch fixed point curves occurs in the moment equations with noise intensity as their bifurcation parameter. Third, the bifurcation of the moment equations corresponds to stochastic resonance in the original system. Finally, the mechanism of stochastic resonance is presented from another viewpoint through analyzing the energy transfer induced by the bifurcation of the moment equations.

  20. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning and design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  1. A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty

    KAUST Repository

    Malenova, G.; Motamed, M.; Runborg, O.; Tempone, Raul

    2016-01-01

    We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.

  2. A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty

    KAUST Repository

    Malenova, G.

    2016-09-08

    We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.

  3. High Weak Order Methods for Stochastic Differential Equations Based on Modified Equations

    KAUST Repository

    Abdulle, Assyr

    2012-01-01

    © 2012 Society for Industrial and Applied Mathematics. Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the construction of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
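
    For orientation, the baseline such constructions improve upon is the weak Euler scheme, in which the Brownian increments may be replaced by cheap two-point random variables ±sqrt(h) without lowering the weak order (a minimal Python sketch of weak order one; the paper's modified-equation integrators of weak order two are not reproduced):

        # Weak Euler scheme with two-point random increments.
        import numpy as np

        rng = np.random.default_rng(4)

        def weak_euler(x0, drift, diffusion, h, n_steps, n_paths):
            x = np.full(n_paths, x0)
            for _ in range(n_steps):
                xi = rng.choice([-1.0, 1.0], size=n_paths) * np.sqrt(h)
                x = x + drift(x) * h + diffusion(x) * xi
            return x

        # E[X_T] for dX = -X dt + 0.5 dW with X_0 = 1 should approach exp(-T).
        T, h = 1.0, 0.01
        x_T = weak_euler(1.0, lambda x: -x, lambda x: 0.5, h, int(T / h), 100000)
        print(x_T.mean(), np.exp(-T))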

  4. Three-dimensional stochastic adjustment of volcano geodetic network in Arenal volcano, Costa Rica

    Science.gov (United States)

    Muller, C.; van der Laat, R.; Cattin, P.-H.; Del Potro, R.

    2009-04-01

    Volcano geodetic networks are a key instrument to understanding magmatic processes and, thus, forecasting potentially hazardous activity. These networks are extensively used on volcanoes worldwide and generally comprise a number of different traditional and modern geodetic surveying techniques such as levelling, distances, triangulation and GNSS. However, in most cases, data from the different methodologies are surveyed, adjusted and analysed independently. Experience shows that the problem with this procedure is the mismatch between the excellent correlation of position values within a single technique and the low cross-correlation of such values between different techniques, or when the same network is surveyed shortly afterwards with the same technique. Moreover, one independent network for each geodetic surveying technique strongly increases the logistics and thus the cost of each measurement campaign. It is therefore important to develop geodetic networks which combine the different geodetic surveying techniques, and to adjust the geodetic data together in order to better quantify the uncertainties associated with the measured displacements. In order to overcome the lack of inter-methodology data integration, the Geomatic Institute of the University of Applied Sciences of Western Switzerland (HEIG-VD) has developed a methodology which uses TRINET+, a software package for the 3D stochastic adjustment of redundant geodetic networks. The methodology consists of using each geodetic measurement technique for its strengths relative to the other methodologies. Also, the combination of the measurements in a single network allows more cost-effective surveying. The geodetic data are thereafter adjusted and analysed in the same reference frame. The adjustment methodology is based on the least mean square method and links the data with the geometry. TRINET+ also allows running a priori simulations of the network, hence testing the quality and resolution to be expected for a determined network even

  5. High-Resolution Replication Profiles Define the Stochastic Nature of Genome Replication Initiation and Termination

    Directory of Open Access Journals (Sweden)

    Michelle Hawkins

    2013-11-01

    Full Text Available Eukaryotic genome replication is stochastic, and each cell uses a different cohort of replication origins. We demonstrate that interpreting high-resolution Saccharomyces cerevisiae genome replication data with a mathematical model allows quantification of the stochastic nature of genome replication, including the efficiency of each origin and the distribution of termination events. Single-cell measurements support the inferred values for stochastic origin activation time. A strain, in which three origins were inactivated, confirmed that the distribution of termination events is primarily dictated by the stochastic activation time of origins. Cell-to-cell variability in origin activity ensures that termination events are widely distributed across virtually the whole genome. We propose that the heterogeneity in origin usage contributes to genome stability by limiting potentially deleterious events from accumulating at particular loci.
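
    The stochastic-origin picture can be illustrated with a toy Python simulation (hypothetical origin positions and firing-time distribution, not the paper's fitted model): each origin fires at a random time, forks spread at unit speed, and converging forks meet at termination sites that vary from cell to cell:

        # Toy simulation of stochastic origin firing and fork termination.
        import numpy as np

        rng = np.random.default_rng(6)
        origins = np.array([10.0, 35.0, 70.0])   # hypothetical origin positions (kb)

        def termination_sites(n_cells):
            sites = []
            for _ in range(n_cells):
                t = rng.exponential(5.0, size=origins.size)  # stochastic firing times
                for i in range(len(origins) - 1):
                    # Converging unit-speed forks meet halfway between the origins,
                    # shifted by the difference in firing times.
                    x = 0.5 * (origins[i] + origins[i + 1] + t[i + 1] - t[i])
                    # Clipping mimics passive replication when one origin fires very late.
                    sites.append(float(np.clip(x, origins[i], origins[i + 1])))
            return np.array(sites)

        print(termination_sites(5).round(1))     # termination sites differ per "cell"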

  6. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  7. Impact of spherical inclusion mean chord length and radius distribution on three-dimensional binary stochastic medium particle transport

    International Nuclear Information System (INIS)

    Brantley, Patrick S.; Martos, Jenny N.

    2011-01-01

    We describe a parallel benchmark procedure and numerical results for a three-dimensional binary stochastic medium particle transport benchmark problem. The binary stochastic medium is composed of optically thick spherical inclusions distributed in an optically thin background matrix material. We investigate three sphere mean chord lengths, three distributions for the sphere radii (constant, uniform, and exponential), and six sphere volume fractions ranging from 0.05 to 0.3. For each sampled independent material realization, we solve the associated transport problem using the Mercury Monte Carlo particle transport code. We compare the ensemble-averaged benchmark fiducial tallies of reflection from and transmission through the spatial domain as well as absorption in the spherical inclusion and background matrix materials. For the parameter values investigated, we find a significant dependence of the ensemble-averaged fiducial tallies on both sphere mean chord length and sphere volume fraction, with the most dramatic variation occurring for the transmission through the spatial domain. We find a weaker dependence of most benchmark tally quantities on the distribution describing the sphere radii, provided the sphere mean chord length used is the same in the different distributions. The exponential distribution produces larger differences from the constant distribution than the uniform distribution produces. The transmission through the spatial domain does exhibit a significant variation when an exponential radius distribution is used. (author)

  8. Stochastic Simulation of Chloride Ingress into Reinforced Concrete Structures by Means of Multi-Dimensional Gaussian Random Fields

    DEFF Research Database (Denmark)

    Frier, Christian; Sørensen, John Dalsgaard

    2005-01-01

    For many reinforced concrete structures corrosion of the reinforcement is an important problem since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation occurs when the chloride content...... is modeled by a 2-dimensional diffusion process by FEM (Finite Element Method) and the diffusion coefficient, surface chloride concentration and reinforcement cover depth are modeled by multidimensional stochastic fields, which are discretized using the EOLE (Expansion Optimum Linear Estimation) approach....... As an example a bridge pier in a marine environment is considered and the results are given in terms of the distribution of the time for initialization of corrosion...

  9. Exact pairing correlations in one-dimensional trapped fermions with stochastic mean-field wave-functions

    Energy Technology Data Exchange (ETDEWEB)

    Juillet, O.; Gulminelli, F. [Caen Univ., Lab. de Physique Corpusculaire (LPC/ENSICAEN), 14 (France); Chomaz, Ph. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France)

    2003-11-01

    The canonical thermodynamic properties of a one-dimensional system of interacting spin-1/2 fermions with an attractive zero-range pseudo-potential are investigated within an exact approach. The density operator is evaluated as the statistical average of dyadics formed from a stochastic mean-field propagation of independent Slater determinants. For a harmonically trapped Fermi gas and for fermions confined in a 1D-like torus, we observe the transition to a quasi-BCS state with Cooper-like momentum correlations and an algebraic long-range order. For few trapped fermions in a rotating torus, a dominant superfluid component with quantized circulation can be isolated. (author)

  10. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we used an alternative dissimilarity measurement called Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  11. High-energy hadron dynamics based on a stochastic-field multieikonal theory

    International Nuclear Information System (INIS)

    Arnold, R.C.

    1977-01-01

    Multieikonal theory, using a stochastic-field representation for collective long-range rapidity correlations, is developed and applied to the calculation of Regge-pole parameters, high-transverse-momentum enhancements, and fluctuation patterns in rapidity densities. If a short-range-order model, such as the one-dimensional planar bootstrap, with only leading t-channel meson poles, is utilized as input to the multieikonal method, the pole spectrum is modified in three ways: promotion and renormalization of leading trajectories (suggesting an effective Pomeron above unity at intermediate energies), and a proliferation of dynamical secondary trajectories, reminiscent of dual models. When transverse dimensions are included, the collective effects produce a growth with energy of large-P/sub T/ inclusive cross sections. Typical-event rapidity distributions, at energies of a few TeV, can be estimated by suitable approximations; the fluctuations give rise to "domain" patterns, which have the appearance of clusters separated by rapidity gaps. The relations between this approach to strong-interaction dynamics and a possible unification of weak, electromagnetic, and strong interactions are outlined

  12. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  13. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  14. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  15. A one-dimensional analysis of real and complex turbulence and the Maxwell set for the stochastic Burgers equation

    International Nuclear Information System (INIS)

    Neate, A D; Truman, A

    2005-01-01

    The inviscid limit of the Burgers equation, with body forces white noise in time, is discussed in terms of the level surfaces of the minimizing Hamilton-Jacobi function and the classical mechanical caustic and their algebraic pre-images under the classical mechanical flow map. The problem is analysed in terms of a reduced (one-dimensional) action function using a circle of ideas due to Arnol'd, Cayley and Klein. We characterize those parts of the caustic which are singular, and give an explicit expression for the cusp density on caustics and level surfaces. By considering the double points of level surfaces we find an explicit formula for the Maxwell set in the two-dimensional polynomial case, and we extend this to higher dimensions using a double discriminant of the reduced action, solving a long-standing problem for Hamiltonian dynamical systems. When the pre-level surface touches the pre-caustic, the geometry (number of cusps) on the level surface changes infinitely rapidly causing 'real turbulence'. Using an idea of Klein, it is shown that the geometry (number of swallowtails) on the caustic also changes infinitely rapidly when the real part of the pre-caustic touches its complex counterpart, causing 'complex turbulence'. These are both inherently stochastic in nature, and we determine their intermittence in terms of the recurrent behaviour of two processes

  16. Stochastic tools in turbulence

    CERN Document Server

    Lumley, John L.

    2012-01-01

    Stochastic Tools in Turbulence discusses the available mathematical tools to describe stochastic vector fields in order to solve problems related to these fields. The book deals with the needs of turbulence in relation to stochastic vector fields, particularly three-dimensional aspects, linear problems, and stochastic model building. The text describes probability distributions and densities, including Lebesgue integration, conditional probabilities, conditional expectations, statistical independence, and lack of correlation. The book also explains the significance of the moments, the properties of the

  17. Global output feedback stabilisation of stochastic high-order feedforward nonlinear systems with time-delay

    Science.gov (United States)

    Zhang, Kemei; Zhao, Cong-Ran; Xie, Xue-Jun

    2015-12-01

    This paper considers the problem of output feedback stabilisation for stochastic high-order feedforward nonlinear systems with time-varying delay. By using the homogeneous domination theory and solving several troublesome obstacles in the design and analysis, an output feedback controller is constructed to drive the closed-loop system globally asymptotically stable in probability.

  18. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
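
    A minimal sketch in the spirit of this approach (a stochastic BFGS-style iteration with an identity-shift curvature regularizer, on an assumed toy quadratic; not the authors' exact RES update) looks as follows in Python:

        # Stochastic BFGS-style iteration with regularized curvature pairs.
        import numpy as np

        rng = np.random.default_rng(5)
        d, gamma, delta, n_iter = 5, 0.05, 0.1, 500
        A = np.diag(np.linspace(1.0, 10.0, d))           # toy objective 0.5 x'Ax

        def stoch_grad(x):
            # Exact gradient plus noise stands in for a sampled gradient.
            return A @ x + 0.1 * rng.standard_normal(d)

        x = np.ones(d)
        B_inv = np.eye(d)                                # inverse Hessian approximation
        g = stoch_grad(x)
        for _ in range(n_iter):
            x_new = x - gamma * B_inv @ g                # stochastic quasi-Newton step
            g_new = stoch_grad(x_new)
            s = x_new - x
            y = g_new - g + delta * s                    # regularized curvature pair
            if s @ y > 1e-10:                            # standard BFGS inverse update
                rho = 1.0 / (s @ y)
                V = np.eye(d) - rho * np.outer(s, y)
                B_inv = V @ B_inv @ V.T + rho * np.outer(s, s)
            x, g = x_new, g_new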

  19. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  20. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...

  1. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case, where the numbers of samples from the two groups differ. The classification problem is considered in the high dimensional case, where the number of variables is much larger than the number of samples and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  2. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  3. A study on stochastic resonance of one-dimensional bistable system in the neighborhood of bifurcation point with the moment method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Guangjun [State Key Laboratory of Mechanical Structural Strength and Vibration, School of Architectural Engineering and Mechanics, Xi' an Jiaotong University, Xi' an, Shaanxi (China); Xu Jianxue [State Key Laboratory of Mechanical Structural Strength and Vibration, School of Architectural Engineering and Mechanics, Xi' an Jiaotong University, Xi' an, Shaanxi (China)] e-mail: jxxu@mail.xjtu.edu.cn

    2006-02-01

    This paper analyzes the stochastic resonance induced by a novel transition of a one-dimensional bistable system in the neighborhood of a bifurcation point, using the method of moments. The transition referred to is that of the system motion between the single-well potential of the stable fixed point before the bifurcation of the original system and the double-well potential of the two coexisting stable fixed points after the bifurcation, in the presence of internal noise. The results show that a semi-analytical description of the stochastic resonance of a one-dimensional bistable system near the bifurcation point can be obtained, and that it agrees qualitatively with Monte Carlo simulation. The occurrence of stochastic resonance is related to the bifurcation of the moment equations of the noisy nonlinear dynamical system: this bifurcation transfers the energy of the ensemble average E[x] of the system response between frequency components and concentrates it at the frequency of the input signal, so that stochastic resonance occurs.
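
    The qualitative effect is easy to reproduce numerically. The following Euler-Maruyama sketch simulates a generic double-well system (not the paper's specific near-bifurcation model; all parameters are illustrative) and measures the Fourier amplitude of the ensemble-average response at the driving frequency, which peaks at an intermediate noise level.

      import numpy as np

      rng = np.random.default_rng(2)
      a, amp, omega = 1.0, 0.1, 0.5          # bistable drift, weak periodic signal
      dt, n_steps, n_paths = 0.01, 20000, 200
      t = np.arange(n_steps) * dt

      def response_amplitude(sigma):
          x = np.full(n_paths, -1.0)         # all paths start in the left well
          acc = np.zeros(n_steps)
          for k in range(n_steps):
              drift = a * x - x**3 + amp * np.sin(omega * t[k])
              x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
              acc[k] = x.mean()              # ensemble average E[x](t)
          # Fourier amplitude of E[x] at the driving frequency
          c = np.trapz(acc * np.exp(-1j * omega * t), t)
          return 2 * np.abs(c) / t[-1]

      for sigma in (0.1, 0.3, 0.5, 0.8):
          print("sigma =", sigma, "-> response:", response_amplitude(sigma))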

  4. A study on stochastic resonance of one-dimensional bistable system in the neighborhood of bifurcation point with the moment method

    International Nuclear Information System (INIS)

    Zhang Guangjun; Xu Jianxue

    2006-01-01

    This paper analyzes the stochastic resonance induced by a novel transition of a one-dimensional bistable system in the neighborhood of a bifurcation point, using the method of moments. The transition referred to is that of the system motion between the single-well potential of the stable fixed point before the bifurcation of the original system and the double-well potential of the two coexisting stable fixed points after the bifurcation, in the presence of internal noise. The results show that a semi-analytical description of the stochastic resonance of a one-dimensional bistable system near the bifurcation point can be obtained, and that it agrees qualitatively with Monte Carlo simulation. The occurrence of stochastic resonance is related to the bifurcation of the moment equations of the noisy nonlinear dynamical system: this bifurcation transfers the energy of the ensemble average E[x] of the system response between frequency components and concentrates it at the frequency of the input signal, so that stochastic resonance occurs.

  5. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have obtained insights into learning in a rat engaged in a non-spatial memory task.
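
    A stripped-down rendering of the two-step idea is shown below: lasso screening followed by a least-squares refit per channel. The paper's LASSLE uses constrained least squares in the second step, so the plain OLS refit here, the VAR order and the penalty level are simplifying assumptions; lassle_var is a hypothetical helper name.

      import numpy as np
      from sklearn.linear_model import Lasso

      def lassle_var(X, p=1, alpha=0.05):
          """Two-step sparse VAR estimate: lasso screening, then least squares
          refitted on the selected support of each equation."""
          T, d = X.shape
          Y = X[p:]                                    # responses at time t
          Z = np.hstack([X[p - k: T - k] for k in range(1, p + 1)])  # lags 1..p
          A = np.zeros((d, d * p))
          for j in range(d):                           # one regression per channel
              sel = np.flatnonzero(Lasso(alpha=alpha).fit(Z, Y[:, j]).coef_)
              if sel.size:                             # step 2: refit on the support
                  coef, *_ = np.linalg.lstsq(Z[:, sel], Y[:, j], rcond=None)
                  A[j, sel] = coef
          return A

      # toy usage: a sparse 2-channel VAR(1)
      rng = np.random.default_rng(3)
      A_true = np.array([[0.5, 0.0], [0.4, 0.3]])
      X = np.zeros((500, 2))
      for t in range(1, 500):
          X[t] = A_true @ X[t - 1] + 0.1 * rng.normal(size=2)
      print(lassle_var(X, p=1).round(2))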

  6. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have obtained insights into learning in a rat engaged in a non-spatial memory task.

  7. Statistics, distillation, and ordering emergence in a two-dimensional stochastic model of particles in counterflowing streams

    Science.gov (United States)

    Stock, Eduardo Velasco; da Silva, Roberto; Fernandes, H. A.

    2017-07-01

    In this paper, we propose a stochastic model which describes two species of particles moving in counterflow. The model generalizes the theoretical framework describing transport in random systems by taking into account two different scenarios: particles of one species move in the opposite direction to particles of the other species and thus act as mobile obstacles, or particles of a given species act as fixed obstacles, remaining in place during the time evolution. We conduct a detailed study of the statistics of particle crossing times, as well as of the effect of lateral transitions on the time required for the system to reach a state of complete geographic separation of the species. The spatial effects of jamming are also studied by looking into the deformation of the concentration of particles in the two-dimensional corridor. Finally, we observe the formation of lane patterns which reach a steady state regardless of the initial conditions used for the evolution. A similar result is observed in real experiments on the motion of charged colloids and in simulations of pedestrian dynamics based on Langevin equations when periodic boundary conditions are considered (particles counterflowing in a ring geometry). The results obtained through Monte Carlo simulations and numerical integrations are in good agreement with each other. However, differently from previous studies, the dynamics considered in this work is not Newton-based, and therefore even artificial situations of self-propelled objects can be studied within this first-principles modeling.
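
    A toy Monte Carlo counterflow lattice gas already shows the lane formation described above. The sketch below is not the authors' model; it keeps only the ingredients named in the abstract (two species hopping in opposite directions, rejection by occupied sites, random lateral transitions), and the corridor size, density and sweep count are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      L, W, sweeps = 50, 10, 2000
      grid = np.zeros((W, L), dtype=int)       # 0 empty, +1 species A, -1 species B
      sites = rng.permutation(W * L)[:200]     # seed 100 particles of each species
      grid.flat[sites[:100]] = 1
      grid.flat[sites[100:]] = -1

      for _ in range(sweeps):
          ys, xs = np.nonzero(grid)
          for i in rng.permutation(len(xs)):
              y, x = ys[i], xs[i]
              s = grid[y, x]
              if s == 0:
                  continue                     # site was vacated earlier this sweep
              xf = (x + s) % L                 # forward site (A right, B left)
              if grid[y, xf] == 0:
                  grid[y, xf], grid[y, x] = s, 0
              else:                            # blocked: try a random lateral step
                  yl = (y + rng.choice((-1, 1))) % W
                  if grid[yl, x] == 0:
                      grid[yl, x], grid[y, x] = s, 0

      # lane order parameter: per-row imbalance between the two species
      row_bias = np.abs(grid.sum(axis=1)) / np.maximum((grid != 0).sum(axis=1), 1)
      print("mean lane order:", row_bias.mean())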

  8. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.

  9. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik

    2013-07-01

    The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed. © 2013 Elsevier Ltd.

  10. Stochastic clustering of material surface under high-heat plasma load

    Science.gov (United States)

    Budaev, Viacheslav P.

    2017-11-01

    The results of a study of surfaces formed on various materials, such as tungsten, carbon and stainless steel, under high-temperature plasma loads are presented. High-temperature plasma irradiation leads to inhomogeneous stochastic clustering of the surface with self-similar granularity (fractality) on scales from the nanoscale to the macroscale. Cauliflower-like structures of tungsten and carbon materials are formed under high heat plasma loads in fusion devices. The statistical characteristics of hierarchical granularity and scale invariance are estimated. They differ qualitatively from the roughness of an ordinary Brownian surface, which is possibly due to universal mechanisms of stochastic clustering of material surfaces under the influence of high-temperature plasma.

  11. Three-dimensional stochastic model of actin–myosin binding in the sarcomere lattice

    Energy Technology Data Exchange (ETDEWEB)

    Mijailovich, Srboljub M.; Kayser-Herold, Oliver; Stojanovic, Boban; Nedic, Djordje; Irving, Thomas C.; Geeves, M. A.

    2016-11-18

    The effect of molecule tethering in three-dimensional (3-D) space on bimolecular binding kinetics is rarely addressed and only occasionally incorporated into models of cell motility. The simplest system that can quantitatively determine this effect is the 3-D sarcomere lattice of the striated muscle, where tethered myosin in thick filaments can only bind to a relatively small number of available sites on the actin filament, positioned within a limited range of thermal movement of the myosin head. Here we implement spatially explicit actomyosin interactions into the multiscale Monte Carlo platform MUSICO, specifically defining how geometrical constraints on tethered myosins can modulate state transition rates in the actomyosin cycle. The simulations provide the distribution of myosin bound to sites on actin, ensure conservation of the number of interacting myosins and actin monomers, and, most importantly, capture the departure in behavior of tethered myosin molecules from unconstrained myosin interactions with actin. In addition, MUSICO determines the number of cross-bridges in each actomyosin cycle state, the force and number of attached cross-bridges per myosin filament, and the range of cross-bridge forces, and accounts for energy consumption. At the macroscopic scale, MUSICO simulations show large differences in predicted force-velocity curves and in the response during the early force recovery phase after a step change in length compared to the two simplest mass action kinetic models. The origin of these differences is rooted in the different fluxes of myosin binding and corresponding instantaneous cross-bridge distributions, and quantitatively reflects a major flaw of the mathematical description in all mass action kinetic models. Consequently, this new approach shows that accurate recapitulation of experimental data requires significantly different binding rates, number of actomyosin states, and cross-bridge elasticity than typically used in mass action kinetic models to...

  12. Global stability of stochastic high-order neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Wang Zidong; Fang Jianan; Liu Xiaohui

    2008-01-01

    High-order neural networks can be considered as an expansion of Hopfield neural networks, and have stronger approximation properties, faster convergence rates, greater storage capacity, and higher fault tolerance than lower-order neural networks. In this paper, the global asymptotic stability analysis problem is considered for a class of stochastic high-order neural networks with discrete and distributed time-delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, several sufficient conditions are derived which guarantee the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the stochastic high-order delayed neural networks under consideration are globally asymptotically stable in the mean square if two linear matrix inequalities (LMIs) are feasible, where the feasibility of the LMIs can be readily checked by the Matlab LMI toolbox. It is also shown that the main results in this paper cover some recently published works. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
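
    While the paper checks its delay-dependent LMIs with the Matlab LMI toolbox, the same kind of feasibility test can be run in Python with cvxpy. The sketch below checks only the generic Lyapunov LMI for a toy linear system; it is not the paper's delayed neural-network condition, and the test matrix is an invented example.

      import cvxpy as cp
      import numpy as np

      # a stable test matrix (not the neural-network model from the paper)
      A = np.array([[-2.0, 1.0],
                    [0.0, -1.5]])
      n = A.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),                    # P positive definite
                     A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov inequality
      prob = cp.Problem(cp.Minimize(0), constraints)
      prob.solve(solver=cp.SCS)
      print("LMI feasible (stable):", prob.status == cp.OPTIMAL)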

  13. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties in analyzing these data come mainly from two aspects: first, there are major statistical and computational challenges in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method, which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC), which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.

  14. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold, justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method, deep learning with MNIST, and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts: the bulk, which is concentrated around zero, and the edges, which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would...

  15. Realizations of highly heterogeneous collagen networks via stochastic reconstruction for micromechanical analysis of tumor cell invasion

    Science.gov (United States)

    Nan, Hanqing; Liang, Long; Chen, Guo; Liu, Liyu; Liu, Ruchuan; Jiao, Yang

    2018-03-01

    Three-dimensional (3D) collective cell migration in a collagen-based extracellular matrix (ECM) is among the most significant topics in developmental biology, cancer progression, tissue regeneration, and immune response. Recent studies have suggested that collagen-fiber mediated force transmission in cellularized ECM plays an important role in stress homeostasis and regulation of collective cellular behaviors. Motivated by the recent in vitro observation that oriented collagen can significantly enhance the penetration of migrating breast cancer cells into dense Matrigel, which mimics the intravasation process in vivo [Han et al., Proc. Natl. Acad. Sci. USA 113, 11208 (2016), 10.1073/pnas.1610347113], we devise a procedure for generating realizations of highly heterogeneous 3D collagen networks with prescribed microstructural statistics via stochastic optimization. Specifically, a collagen network is represented via the graph (node-bond) model, and the microstructural statistics considered include the cross-link (node) density, valence distribution, fiber (bond) length distribution, as well as the fiber orientation distribution. An optimization problem is formulated in which the objective function is defined as the squared difference between a set of target microstructural statistics and the corresponding statistics for the simulated network. Simulated annealing is employed to solve the optimization problem by evolving an initial network via random perturbations to generate realizations of homogeneous networks with randomly oriented fibers, homogeneous networks with aligned fibers, heterogeneous networks with a continuous variation of fiber orientation along a prescribed direction, as well as a binary system containing a collagen region with aligned fibers and a dense Matrigel region with randomly oriented fibers. The generation and propagation of active forces in the simulated networks due to polarized contraction of an embedded ellipsoidal cell and a small group...
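
    The optimization loop described here is classic simulated annealing over network configurations. A toy sketch that matches only a target fiber-orientation histogram follows; the real objective also includes node density, valence and fiber-length statistics, and every constant below (bin count, cooling rate, perturbation scale) is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(6)
      n_fibers, n_bins = 500, 12
      target = np.zeros(n_bins)
      target[:2] = 0.5                         # strongly aligned target distribution

      def stats(angles):
          h, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi))
          return h / len(angles)

      def energy(angles):
          # squared difference between target and current statistics
          return np.sum((stats(angles) - target) ** 2)

      angles = rng.uniform(0, np.pi, n_fibers)  # start from an isotropic network
      E, T = energy(angles), 1e-2
      for _ in range(20000):
          i = rng.integers(n_fibers)
          old = angles[i]
          angles[i] = (old + rng.normal(scale=0.3)) % np.pi   # random perturbation
          dE = energy(angles) - E
          if dE < 0 or rng.random() < np.exp(-dE / T):
              E += dE                           # accept the move
          else:
              angles[i] = old                   # reject and restore
          T *= 0.9997                           # cooling schedule
      print("final squared error:", E)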

  16. SUPPRESSION OF LARGE EDGE LOCALIZED MODES IN HIGH CONFINEMENT DIII-D PLASMAS WITH A STOCHASTIC MAGNETIC BOUNDARY

    International Nuclear Information System (INIS)

    EVANS, TE; MOYER, RA; THOMAS, PR; WATKINS, JG; OSBORNE, TH; BOEDO, JA; FENSTERMACHER, ME; FINKEN, KH; GROEBNER, RJ; GROTH, M; HARRIS, JH; LAHAYE, RJ; LASNIER, CJ; MASUZAKI, S; OHYABU, N; PRETTY, D; RHODES, TL; REIMERDES, H; RUDAKOV, DL; SCHAFFER, MJ; WANG, G; ZENG, L.

    2003-01-01

    OAK-B135 A stochastic magnetic boundary, produced by an externally applied edge resonant magnetic perturbation, is used to suppress large edge localized modes (ELMs) in high confinement (H-mode) plasmas. The resulting H-mode displays rapid, small oscillations with a bursty character modulated by a coherent 130 Hz envelope. The H-mode transport barrier is unaffected by the stochastic boundary. The core confinement of these discharges is unaffected, despite a three-fold drop in the toroidal rotation in the plasma core. These results demonstrate that stochastic boundaries are compatible with H-modes and may be attractive for ELM control in next-step burning fusion tokamaks

  17. Suppression of large edge-localized modes in high-confinement DIII-D plasmas with a stochastic magnetic boundary.

    Science.gov (United States)

    Evans, T E; Moyer, R A; Thomas, P R; Watkins, J G; Osborne, T H; Boedo, J A; Doyle, E J; Fenstermacher, M E; Finken, K H; Groebner, R J; Groth, M; Harris, J H; La Haye, R J; Lasnier, C J; Masuzaki, S; Ohyabu, N; Pretty, D G; Rhodes, T L; Reimerdes, H; Rudakov, D L; Schaffer, M J; Wang, G; Zeng, L

    2004-06-11

    A stochastic magnetic boundary, produced by an applied edge resonant magnetic perturbation, is used to suppress most large edge-localized modes (ELMs) in high confinement (H-mode) plasmas. The resulting H mode displays rapid, small oscillations with a bursty character modulated by a coherent 130 Hz envelope. The H mode transport barrier and core confinement are unaffected by the stochastic boundary, despite a threefold drop in the toroidal rotation. These results demonstrate that stochastic boundaries are compatible with H modes and may be attractive for ELM control in next-step fusion tokamaks.

  18. Quantum stochastics

    CERN Document Server

    Chang, Mou-Hsiung

    2015-01-01

    The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...

  19. Inverse stochastic-dynamic models for high-resolution Greenland ice core records

    DEFF Research Database (Denmark)

    Boers, Niklas; Chekroun, Mickael D.; Liu, Honghu

    2017-01-01

    Proxy records from Greenland ice cores have been studied for several decades, yet many open questions remain regarding the climate variability encoded therein. Here, we use a Bayesian framework for inferring inverse, stochastic-dynamic models from 18O and dust records of unprecedented, subdecadal resolution... The inferred models reproduce statistical properties such as probability density functions, waiting times and power spectra, with no need for any external forcing. The crucial ingredients for capturing these properties are (i) high-resolution training data, (ii) cubic drift terms, and (iii) nonlinear coupling terms between the 18O and dust...

  20. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at high spatial and temporal resolution is a fundamental step in fully assessing urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10^2 m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  1. Pore-scale hydrodynamics in a progressively bio-clogged three-dimensional porous medium: 3D particle tracking experiments and stochastic transport modelling

    Science.gov (United States)

    Morales, V. L.; Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2017-12-01

    Biofilms are ubiquitous bacterial communities growing in various porous media, including soils and trickling and sand filters, and are relevant for applications such as pollutant degradation for bioremediation and waste water treatment or drinking water production. As they develop, biofilms dynamically change the structure of porous media, increasing the heterogeneity of the pore network and the non-Fickian or anomalous dispersion. In this work, we use an experimental approach to investigate the influence of biofilm growth on pore-scale hydrodynamics and transport processes, and propose a correlated continuous time random walk model capturing these observations. We perform three-dimensional particle tracking velocimetry at four time points from 0 to 48 hours of biofilm growth. The biofilm growth notably impacts pore-scale hydrodynamics, as shown by a strong increase of the average velocity and in the tailing of the Lagrangian velocity probability density functions. Additionally, the spatial correlation length of the flow increases substantially. This points to the formation of preferential flow pathways and stagnation zones, which ultimately leads to an increase of anomalous transport in the porous media considered, characterized by non-Fickian scaling of mean-squared displacements and non-Gaussian displacement probability density functions. A gamma distribution provides a remarkable approximation of the bulk and the high tail of the Lagrangian pore-scale velocity magnitude, indicating a transition from a parallel pore arrangement towards a more serial one. Finally, a correlated continuous time random walk based on a stochastic velocity relation accurately reproduces the observations and could be used to predict transport beyond the time scales accessible to the experiment.
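
    The non-Fickian scaling reported here can be illustrated with the simplest, uncorrelated continuous time random walk; the paper's model additionally correlates successive velocities. In the sketch below, Pareto waiting times with illustrative parameters produce a mean-squared displacement exponent well below the Fickian value of 1.

      import numpy as np

      rng = np.random.default_rng(7)
      n_walkers, n_jumps, a = 2000, 400, 0.7    # a < 1: heavy-tailed waiting times

      waits = 1.0 + rng.pareto(a, size=(n_walkers, n_jumps))
      jumps = rng.normal(size=(n_walkers, n_jumps))
      t = np.hstack([np.zeros((n_walkers, 1)), np.cumsum(waits, axis=1)])
      x = np.hstack([np.zeros((n_walkers, 1)), np.cumsum(jumps, axis=1)])

      grid = np.logspace(1, 3, 15)              # observation times
      msd = [np.mean([xw[np.searchsorted(tw, s) - 1] ** 2
                      for tw, xw in zip(t, x)]) for s in grid]
      slope = np.polyfit(np.log(grid), np.log(msd), 1)[0]
      print("MSD exponent:", round(slope, 2), "(Fickian diffusion would give 1)")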

  2. Fatigue in Welded High-Strength Steel Plate Elements under Stochastic Loading

    DEFF Research Database (Denmark)

    Agerskov, Henning; Petersen, R.I.; Martinez, L. Lopez

    1999-01-01

    The present project is part of an investigation on fatigue in offshore structures in high-strength steel. The fatigue life of plate elements with welded attachments is studied. The material used has a yield stress of ~810-840 MPa, and high weldability and toughness properties. Fatigue test series with constant amplitude loading and with various types of stochastic loading have been carried out on test specimens in high-strength steel and, for comparison, on test specimens in conventional offshore structural steel with a yield stress of ~400-410 MPa. A comparison between constant amplitude and variable amplitude fatigue test results shows shorter fatigue lives under variable amplitude loading than would be expected from the linear fatigue damage accumulation formula. Furthermore, in general longer fatigue lives were obtained for the test specimens in high-strength steel than those...

  3. Transport in Stochastic Media

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.; Thieberger, R.

    1998-01-01

    Classical transport of neutral particles in a binary, scattering, stochastic medium is discussed. It is assumed that the cross-sections of the constituent materials and their volume fractions are known. The inner structure of the medium is stochastic, but there exists statistical knowledge about the lump sizes, shapes and arrangement. The transmission through the composite medium depends on the specific heterogeneous realization of the medium. The current research focuses on the averaged transmission through an ensemble of realizations, from which an effective cross-section for the medium can be derived. The problem of one-dimensional transport in stochastic media has been studied extensively [1]. In the one-dimensional description of the problem, particles are transported along a line populated with alternating material segments of random lengths. The current work discusses transport in two-dimensional stochastic media. The phenomenon that is unique to the multi-dimensional description of the problem is obstacle bypassing. Obstacle bypassing tends to reduce the opacity of the medium, thereby reducing its effective cross-section. The importance of this phenomenon depends on the manner in which the obstacles are arranged in the medium. Results of transport simulations in multi-dimensional stochastic media are presented. Effective cross-sections derived from the simulations are compared against those obtained for the one-dimensional problem, and against those obtained from effective multi-dimensional models, which are partially based on a Markovian assumption.

  4. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications. Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic...

  5. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    Science.gov (United States)

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw ... 100 km).

  6. Diffusion with intrinsic trapping in 2-d incompressible stochastic velocity fields

    International Nuclear Information System (INIS)

    Vlad, M.; Spineanu, F.; Misguich, J.H.; Vlad, M.; Spineanu, F.; Balescu, R.

    1998-10-01

    A new statistical approach that applies to high Kubo number regimes of particle diffusion in stochastic velocity fields is presented. This two-dimensional model describes the partial trapping of the particles in the stochastic field. The results are close to the numerical simulations and also to estimates based on percolation theory. (authors)

  7. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  8. Numerical study of a stochastic particle algorithm solving a multidimensional population balance model for high shear granulation

    International Nuclear Information System (INIS)

    Braumann, Andreas; Kraft, Markus; Wagner, Wolfgang

    2010-01-01

    This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.

  9. Solution of stochastic nonlinear PDEs using Wiener-Hermite expansion of high orders

    KAUST Repository

    El Beltagy, Mohamed

    2016-01-06

    In this work, the Wiener-Hermite Expansion (WHE) is used to solve stochastic nonlinear PDEs excited with noise. The generation of the equivalent set of deterministic integro-differential equations is automated and hence allows for high order terms of WHE. The automation difficulties are discussed, solved and implemented to output the final system to be solved. A numerical Picard-like algorithm is suggested to solve the resulting deterministic system. The automated WHE is applied to the 1D diffusion equation and to the heat equation. The results are compared with previous solutions obtained with the WHEP (WHE with perturbation) technique. The solution obtained using the suggested WHE technique is shown to be the limit of the WHEP solutions with an infinite number of corrections. The automation is extended easily to account for white-noise of higher dimension and for general nonlinear PDEs.

  10. Solution of stochastic nonlinear PDEs using Wiener-Hermite expansion of high orders

    KAUST Repository

    El Beltagy, Mohamed

    2016-01-01

    In this work, the Wiener-Hermite Expansion (WHE) is used to solve stochastic nonlinear PDEs excited with noise. The generation of the equivalent set of deterministic integro-differential equations is automated and hence allows for high order terms of WHE. The automation difficulties are discussed, solved and implemented to output the final system to be solved. A numerical Picard-like algorithm is suggested to solve the resulting deterministic system. The automated WHE is applied to the 1D diffusion equation and to the heat equation. The results are compared with previous solutions obtained with the WHEP (WHE with perturbation) technique. The solution obtained using the suggested WHE technique is shown to be the limit of the WHEP solutions with an infinite number of corrections. The automation is extended easily to account for white-noise of higher dimension and for general nonlinear PDEs.

  11. Transport properties of stochastic Lorentz models

    NARCIS (Netherlands)

    Beijeren, H. van

    Diffusion processes are considered for one-dimensional stochastic Lorentz models, consisting of randomly distributed fixed scatterers and one moving light particle. In waiting time Lorentz models, the light particle makes instantaneous jumps between scatterers after a stochastically distributed...

  12. Stochastic inequalities and applications to dynamics analysis of a novel SIVS epidemic model with jumps

    Directory of Open Access Journals (Sweden)

    Xiaona Leng

    2017-06-01

    This paper proposes a new nonlinear stochastic SIVS epidemic model with a double epidemic hypothesis and Lévy jumps. The main purpose of this paper is to investigate the threshold dynamics of the stochastic SIVS epidemic model. By using a series of stochastic inequalities, we obtain sufficient conditions for the persistence in mean and extinction of the stochastic system, and the threshold which governs the extinction and the spread of the epidemic diseases. Finally, this paper describes the results of numerical simulations investigating the dynamical effects of stochastic disturbance. Our results significantly improve and generalize the corresponding results in the recent literature. The developed theoretical methods and stochastic inequality techniques can be used to investigate high-dimensional nonlinear stochastic differential systems.

  13. A heterogeneous stochastic FEM framework for elliptic PDEs

    International Nuclear Information System (INIS)

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-01

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis functions with local stochastic basis functions to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using randomized range-finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.
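
    The randomized range-finding step mentioned above is a standard primitive (in the style of Halko, Martinsson and Tropp): sample the operator on a few random vectors and orthonormalize the result. Below is a minimal sketch on a toy low-rank matrix standing in for the local solution map; the sizes and the oversampling parameter are illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      def range_finder(apply_A, n, k, p=5):
          """Approximate an orthonormal basis of the range of a linear operator
          by sketching it with k + p random test vectors."""
          Omega = rng.normal(size=(n, k + p))   # random test matrix
          Y = apply_A(Omega)                    # sketch of the operator
          Q, _ = np.linalg.qr(Y)
          return Q[:, :k]

      # toy rank-5 operator
      U = np.linalg.qr(rng.normal(size=(100, 5)))[0]
      A = U @ np.diag([10, 5, 2, 1, 0.5]) @ U.T
      Q = range_finder(lambda M: A @ M, 100, 5)
      print("range capture error:", np.linalg.norm(A - Q @ (Q.T @ A)))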

  14. Stochastic runaway of dynamical systems

    International Nuclear Information System (INIS)

    Pfirsch, D.; Graeff, P.

    1984-10-01

    One-dimensional stochastic dynamical systems are well studied with respect to their stability properties. Less is known for the higher-dimensional case. This paper derives necessary and sufficient criteria for the asymptotic divergence of the entropy (runaway) and sufficient ones for the moments of n-dimensional stochastic dynamical systems. The crucial implication is the incompressibility of their flow, defined by the equations of motion, in configuration space. Two possible extensions to compressible flow systems are outlined. (orig.)

  15. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  16. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  17. MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM

    Directory of Open Access Journals (Sweden)

    LIXIN LIU

    2014-01-01

    Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and the imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform fast three-dimensional fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning of the whole field of view is implemented.
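
    The uniformity claim can be checked with a few lines of Monte Carlo, in the spirit of the simulation described above. The sketch below treats a single scan unit of the N × N foci array and assumes, for simplicity, that the white-noise voltages map to uniformly random pixel positions; the unit size and sample count are illustrative.

      import numpy as np

      rng = np.random.default_rng(9)
      tile, n_samples = 32, 20000               # pixels per scan unit, scan steps
      counts = np.zeros((tile, tile), dtype=int)
      for _ in range(n_samples):
          ix, iy = rng.integers(tile, size=2)   # white-noise-driven mirror position
          counts[iy, ix] += 1

      # uniformity diagnostics: unvisited fraction and coefficient of variation
      print("unvisited fraction:", (counts == 0).mean())
      print("CV of dwell counts:", round(counts.std() / counts.mean(), 3))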

  18. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others, and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  19. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  20. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...

  1. A high-resolution stochastic model of domestic activity patterns and electricity demand

    International Nuclear Information System (INIS)

    Widen, Joakim; Waeckelgard, Ewa

    2010-01-01

    Realistic time-resolved data on occupant behaviour, presence and energy use are important inputs to various types of simulations, including performance of small-scale energy systems and buildings' indoor climate, use of lighting and energy demand. This paper presents a modelling framework for stochastic generation of high-resolution series of such data. The model generates both synthetic activity sequences of individual household members, including occupancy states, and domestic electricity demand based on these patterns. The activity-generating model, based on non-homogeneous Markov chains that are tuned to an extensive empirical time-use data set, creates a realistic spread of activities over time, down to a 1-min resolution. A detailed validation against measurements shows that modelled power demand data for individual households as well as aggregate demand for an arbitrary number of households are highly realistic in terms of end-use composition, annual and diurnal variations, diversity between households, short time-scale fluctuations and load coincidence. An important aim with the model development has been to maintain a sound balance between complexity and output quality. Although the model yields a high-quality output, the proposed model structure is uncomplicated in comparison to other available domestic load models.
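
    At its core, a non-homogeneous Markov chain generator of this kind reduces to a time-indexed transition matrix. The sketch below uses three invented occupancy states and a crude two-regime day/night profile in place of matrices tuned to time-use survey data, so every number is an illustrative assumption; the real model also converts the generated activities into appliance-level electricity demand.

      import numpy as np

      rng = np.random.default_rng(10)
      # three states: 0 absent, 1 at home awake, 2 asleep (illustrative)
      minutes = 24 * 60

      def P(minute):
          """Time-dependent transition matrix: a crude day/night profile."""
          night = minute < 6 * 60 or minute > 22 * 60
          if night:
              return np.array([[0.995, 0.003, 0.002],
                               [0.001, 0.900, 0.099],
                               [0.000, 0.005, 0.995]])
          return np.array([[0.990, 0.009, 0.001],
                           [0.010, 0.985, 0.005],
                           [0.001, 0.099, 0.900]])

      state = 2                                 # start asleep at midnight
      trace = np.empty(minutes, dtype=int)
      for m in range(minutes):                  # 1-min resolution, as in the paper
          state = rng.choice(3, p=P(m)[state])
          trace[m] = state
      print("hours asleep:", (trace == 2).sum() / 60)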

  2. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated...

  3. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering on high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED, we observe improvements in kNN classification accuracy over traditional distance functions.

  4. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
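
    A minimal recurrence plot can be computed directly from a trajectory by thresholding pairwise distances; the sketch below does so for a crudely integrated Lorenz96 trajectory (Euler stepping and the 10% distance threshold are illustrative choices, not those of the paper).

        import numpy as np

        def lorenz96_step(x, F=8.0, dt=0.01):
            # crude Euler step of dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
            return x + dt * ((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1)
                             - x + F)

        rng = np.random.default_rng(0)
        x = 8.0 + 0.01 * rng.standard_normal(40)
        states = []
        for _ in range(2000):
            x = lorenz96_step(x)
            states.append(x.copy())
        traj = np.asarray(states)[500::5]          # drop transient, subsample

        # recurrence matrix: R[i, j] = 1 where two states are closer than eps
        d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
        R = (d < np.percentile(d, 10)).astype(np.uint8)
        print("recurrence rate:", R.mean())        # ~0.10 by construction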

  5. A note on chaotic vs. stochastic behavior of the high-latitude ionospheric plasma density fluctuations

    Directory of Open Access Journals (Sweden)

    A. W. Wernik

    1996-01-01

    Full Text Available Four data sets of density fluctuations measured in-situ by the Dynamics Explorer 2 (DE 2) satellite were analyzed in an attempt to study the chaotic nature of high-latitude turbulence and, in this way, to complement the conventional spectral analysis. It has been found that the probability distribution function of density differences is far from Gaussian and similar to that observed in intermittent fluid or MHD turbulence. This indicates that ionospheric density fluctuations are not stochastic but coherent to some extent. Wayland's and surrogate data tests for determinism in a time series of density data allowed us to differentiate between regions of intense shear and moderate shear. We observe that in the region of strong field-aligned currents (FACs) and intense shear, or along the convection in the collisional regime, ionospheric turbulence behaves like random noise with non-Gaussian statistics, implying that the underlying physical process is nondeterministic. On the other hand, when FACs are weak and shear is moderate, or when observations are made in the inertial regime, the turbulence is chaotic. The attractor dimension is lowest (1.9) for 'old' convected irregularities. The dimension 3.2 is found for turbulence in the inertial regime and a considerably smaller one (2.4) in the collisional regime. It is suggested that a high dimension in the inertial regime may be caused by a complicated velocity structure in the shear instability region.

  6. PV Hosting Capacity Analysis and Enhancement Using High Resolution Stochastic Modeling

    Directory of Open Access Journals (Sweden)

    Emilio J. Palacios-Garcia

    2017-09-01

    Full Text Available Reduction of CO2 emissions is a main target in the future smart grid. This goal is boosting the installation of renewable energy sources (RES), as well as a greater consumer engagement that seeks a more efficient utilization of these resources, leading toward the figure of the 'prosumer'. Nevertheless, these resources have an intermittent nature, which requires the presence of an energy storage system and an energy management system (EMS) to ensure an uninterrupted power supply. Moreover, network-related issues might arise as the installed renewable capacity in the grid increases, and storage systems are also capable of contributing to network stability. However, to assess these future scenarios and test the control strategies, a simulation system is needed. The aim of this paper is to analyze the interaction between residential consumers with high penetration of PV generation and distributed storage and the grid, by means of a high temporal resolution simulation scenario based on a stochastic residential load model and PV production records. Results of the model are presented for different PV power rates and storage capacities, as well as a two-level charging strategy as a mechanism for increasing the hosting capacity (HC) of the network.

  7. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    Science.gov (United States)

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
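
    A minimal sketch of the core idea, assuming PyTorch and a placeholder reduced-order time series (the paper does not prescribe this implementation): an LSTM maps a window of reduced coordinates to the next reduced state.

        import torch
        import torch.nn as nn

        class Forecaster(nn.Module):
            def __init__(self, n_modes, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_modes, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_modes)

            def forward(self, window):             # window: (batch, time, modes)
                out, _ = self.lstm(window)
                return self.head(out[:, -1, :])    # next reduced-order state

        T, n_modes, lag = 2000, 8, 32
        z = torch.cumsum(0.1 * torch.randn(T, n_modes), dim=0)  # placeholder
        X = torch.stack([z[i:i + lag] for i in range(T - lag)])
        y = z[lag:]

        model = Forecaster(n_modes)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(200):                       # full-batch demo training
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()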

  8. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is close to an NP-hard problem, because the number of feature combinations grows exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since brute-force enumeration of every possible combination of features is computationally infeasible, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search on several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
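
    The wrapper idea can be sketched with a simple stochastic bit-flip search standing in for the swarm metaheuristics named above; the fitness function is the cross-validated accuracy of any plugged-in classifier (here kNN via scikit-learn, on synthetic data).

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=60,
                                   n_informative=8, random_state=0)

        def fitness(mask):
            # fitness = cross-validated accuracy of the plugged-in classifier
            if not mask.any():
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        mask = rng.random(X.shape[1]) < 0.5        # random initial subset
        best = fitness(mask)
        for _ in range(200):                       # stochastic bit-flip search
            cand = mask.copy()
            cand[rng.integers(len(cand))] ^= True
            f = fitness(cand)
            if f >= best:                          # accept non-worsening moves
                mask, best = cand, f
        print(best, mask.sum())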

  9. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.

  10. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
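
    A minimal p >> n example of the LASSO with scikit-learn, recovering a sparse coefficient vector from far fewer samples than variables:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, p = 50, 1000                      # far more variables than samples
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = 2.0                       # sparse ground truth
        y = X @ beta + 0.1 * rng.normal(size=n)

        fit = Lasso(alpha=0.1).fit(X, y)
        print("nonzero coefficients:", np.flatnonzero(fit.coef_))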

  11. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, optimal cloning machines have hitherto not been experimentally demonstrated for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  12. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    Energy Technology Data Exchange (ETDEWEB)

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.

  13. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    International Nuclear Information System (INIS)

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.

  14. Stochastic scheduling of local distribution systems considering high penetration of plug-in electric vehicles and renewable energy sources

    International Nuclear Information System (INIS)

    Tabatabaee, Sajad; Mortazavi, Seyed Saeedallah; Niknam, Taher

    2017-01-01

    This paper investigates the optimal scheduling of electric power units in renewable-based local distribution systems considering plug-in electric vehicles (PEVs). The appearance of PEVs in the electric grid can create new challenges for the operation of distributed generations and power units inside the network. In order to deal with this issue, a new stochastic optimization method is devised to let the central controller manage the power units and the charging behavior of PEVs. The problem formulation aims to minimize the total cost of the network, including the cost of power supply for loads and PEVs as well as the cost of energy not supplied (ENS) as the reliability cost. In order to make PEVs an opportunity for the grid, vehicle-to-grid (V2G) technology is employed to reduce the operational costs. To model the highly uncertain behavior of wind turbines, photovoltaics and the charging and discharging patterns of PEVs, a new stochastic power flow based on the unscented transform is proposed. Finally, a new optimization algorithm based on the bat algorithm (BA) is proposed to solve the problem optimally. The satisfying performance of the proposed stochastic method is tested on a grid-connected local distribution system. - Highlights: • Introduction of a stochastic method to assess plug-in electric vehicle effects on the microgrid. • Assessing the role of V2G technology on battery aging and degradation costs. • Use of BA for solving the proposed problem. • Introduction of a new modification method for the BA.

  15. Stochastic model of the near-to-injector spray formation assisted by a high-speed coaxial gas jet

    Energy Technology Data Exchange (ETDEWEB)

    Gorokhovski, M [Laboratoire de Mecanique des Fluides et d' Acoustique, CNRS-Ecole Centrale de Lyon-INSA Lyon-Universite Claude Bernard Lyon 1, 36 Avenue Guy de Collongue, 69131 Ecully Cedex (France); Jouanguy, J [Laboratoire de Mecanique de Lille, Ecole Centrale de Lille, Blvd Paul Langevin, 59655 Villeneuve d' Ascq Cedex (France); Chtab-Desportes, A [CD-adapco, 31 rue Delizy 93698 Pantin Cedex (France)], E-mail: mikhael.gorokhovski@ec-lyon.fr

    2009-06-01

    The stochastic model of spray formation in the vicinity of the air-blast atomizer has been described and assessed by comparison with measurements. In this model, the 3D configuration of a continuous liquid core is simulated by spatial trajectories of specifically introduced stochastic particles. The stochastic process is based on the assumption that due to a high Weber number, the exiting continuous liquid jet is depleted in the framework of statistical universalities of a cascade fragmentation under scaling symmetry. The parameters of the stochastic process have been determined according to observations from Lasheras's, Hopfinger's and Villermaux's scientific groups. The spray formation model, based on the computation of spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region, is combined with the large-eddy simulation (LES) in the coaxial gas jet. Comparison with measurements reported in the literature for different values of the gas-to-liquid dynamic pressure ratio showed that the model predicts correctly the distribution of liquid in the close-to-injector region, the mean length of the liquid core, the spray angle and the typical size of droplets in the far field of spray.

  16. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines patterns in heterogeneous high dimensional data.

  17. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges over which chaos occurs are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.

  18. Inverse stochastic-dynamic models for high-resolution Greenland ice core records

    Science.gov (United States)

    Boers, Niklas; Chekroun, Mickael D.; Liu, Honghu; Kondrashov, Dmitri; Rousseau, Denis-Didier; Svensson, Anders; Bigler, Matthias; Ghil, Michael

    2017-12-01

    Proxy records from Greenland ice cores have been studied for several decades, yet many open questions remain regarding the climate variability encoded therein. Here, we use a Bayesian framework for inferring inverse, stochastic-dynamic models from δ18O and dust records of unprecedented, subdecadal temporal resolution. The records stem from the North Greenland Ice Core Project (NGRIP), and we focus on the time interval 59-22 ka b2k. Our model reproduces the dynamical characteristics of both the δ18O and dust proxy records, including the millennial-scale Dansgaard-Oeschger variability, as well as statistical properties such as probability density functions, waiting times and power spectra, with no need for any external forcing. The crucial ingredients for capturing these properties are (i) high-resolution training data, (ii) cubic drift terms, (iii) nonlinear coupling terms between the δ18O and dust time series, and (iv) non-Markovian contributions that represent short-term memory effects.

  19. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis has become a popularly used data analysis method in a number of areas.

  20. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramount to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
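
    The screening step can be sketched in a few lines: rank features by the absolute two-sample t-statistic, keep the top m, and classify with a nearest-centroid independence rule on the retained features (the choice m = 20 below is ad hoc, not the error-bound-driven choice of the paper).

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(2)
        n, p = 40, 500
        X0 = rng.normal(size=(n, p))                 # class 0 samples
        X1 = rng.normal(size=(n, p))
        X1[:, :10] += 1.0                            # 10 informative features

        t, _ = ttest_ind(X0, X1, axis=0)
        keep = np.argsort(-np.abs(t))[:20]           # keep the m largest |t|

        # independence rule on retained features: nearest class centroid
        mu0 = X0[:, keep].mean(axis=0)
        mu1 = X1[:, keep].mean(axis=0)
        def classify(x):
            return int(np.sum((x[keep] - mu1) ** 2)
                       < np.sum((x[keep] - mu0) ** 2))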

  1. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  2. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  3. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables.An approximating Markov chain is built using this sampling and

  4. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  5. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  6. The stochastic nature of the domain wall motion along high perpendicular anisotropy strips with surface roughness

    International Nuclear Information System (INIS)

    Martinez, Eduardo

    2012-01-01

    The domain wall dynamics along thin ferromagnetic strips with high perpendicular magnetocrystalline anisotropy driven by either magnetic fields or spin-polarized currents is theoretically analyzed by means of full micromagnetic simulations and a one-dimensional model, including both surface roughness and thermal effects. At finite temperature, the results show a field dependence of the domain wall velocity in good qualitative agreement with available experimental measurements, indicating a low field, low velocity creep regime, and a high field, linear regime separated by a smeared depinning region. Similar behaviors were also observed under applied currents. In the low current creep regime the velocity-current characteristic does not depend significantly on the non-adiabaticity. At high currents, where the domain wall velocity becomes insensitive to surface pinning, the domain wall shows a precessional behavior even when the non-adiabatic parameter is equal to the Gilbert damping. These analyses confirm the relevance of both thermal fluctuations and surface roughness for the domain wall dynamics, and that complete micromagnetic modeling and one-dimensional studies taking into account these effects are required to interpret the experimental measurements in order to get a better understanding of the origin, the role and the magnitude of the non-adiabaticity. (paper)

  7. Stochastic Optimization Model to Study the Operational Impacts of High Wind Penetrations in Ireland

    DEFF Research Database (Denmark)

    Meibom, Peter; Barth, R.; Hasche, B.

    2011-01-01

    A stochastic mixed integer linear optimization scheduling model minimizing system operation costs and treating load and wind power production as stochastic inputs is presented. The schedules are updated in a rolling manner as more up-to-date information becomes available. This is a fundamental change relative to day-ahead unit commitment approaches. The need for reserves dependent on forecast horizon and share of wind power has been estimated with a statistical model combining load and wind power forecast errors with scenarios of forced outages. The model is used to study operational impacts...

  8. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton G [ORNL; Zhang, Guannan [ORNL; Gunzburger, Max D [ORNL

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  9. Stochastic quantisation: theme and variation

    International Nuclear Information System (INIS)

    Klauder, J.R.; Kyoto Univ.

    1987-01-01

    The paper on stochastic quantisation is a contribution to the book commemorating the sixtieth birthday of E.S. Fradkin. Stochastic quantisation reformulates Euclidean quantum field theory in the language of Langevin equations. The generalised free field is discussed from the viewpoint of stochastic quantisation. An artificial family of highly singular model theories wherein the space-time derivatives are dropped altogether is also examined. Finally a modified form of stochastic quantisation is considered. (U.K.)

  10. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (VDP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative and numerical examples study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
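
    For reference, the SUPERBEE limiter named above has a simple closed form, phi(r) = max(0, min(2r, 1), min(r, 2)), where r is the ratio of consecutive solution gradients; a direct NumPy transcription:

        import numpy as np

        def superbee(r):
            # TVD flux limiter: 0 for r <= 0, capped at 2 for steep gradients
            return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                              np.minimum(r, 2.0)))

        r = np.linspace(-1.0, 3.0, 9)
        print(superbee(r))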

  11. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  12. A three-dimensional stochastic model of the behavior of radionuclides in forests. Part 2. Cs-137 behavior in forest soils

    International Nuclear Information System (INIS)

    Berg, Mitchell T.; Shuman, Larry J.

    1995-01-01

    Using a three-dimensional stochastic model of radionuclides in forests developed in Part 1, this work simulates the long-term behavior of Cs-137 in forest soil. It is assumed that the behavior of Cs-137 in soils is driven by its advection and dispersion due to the infiltration of the soil solution, and its sorption to the soil matrix. As Cs-137 transport through soils is affected by its uptake and release by forest vegetation, a model of radiocesium behavior in forest vegetation is presented in Part 3 of this paper. To estimate the rate of infiltration of water through the soil, models are presented to estimate the hydrological cycle of the forest including infiltration, evapotranspiration, and the root uptake of water. The state transition probabilities for the random walk model of Cs-137 transport are then estimated using the models developed to predict the distribution of water in the forest. The random walk model is then tested using a baseline scenario in which Cs-137 is deposited into a coniferous forest ecosystem.

  13. Stochastic price modeling of high volatility, mean-reverting, spike-prone commodities: The Australian wholesale spot electricity market

    International Nuclear Information System (INIS)

    Higgs, Helen; Worthington, Andrew

    2008-01-01

    It is commonly known that wholesale spot electricity markets exhibit high price volatility, strong mean-reversion and frequent extreme price spikes. This paper employs a basic stochastic model, a mean-reverting model and a regime-switching model to capture these features in the Australian national electricity market (NEM), comprising the interconnected markets of New South Wales, Queensland, South Australia and Victoria. Daily spot prices from 1 January 1999 to 31 December 2004 are employed. The results show that the regime-switching model outperforms the basic stochastic and mean-reverting models. Electricity prices are also found to exhibit stronger mean-reversion after a price spike than in the normal period, and price volatility is more than fourteen times higher in spike periods than in normal periods. The probability of a spike on any given day ranges between 5.16% in NSW and 9.44% in Victoria.
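
    A toy regime-switching simulation in the spirit of the models compared above (all parameter values below are illustrative, not the paper's estimates): a mean-reverting log price with a two-state Markov chain that switches between a normal and a spike regime.

        import numpy as np

        rng = np.random.default_rng(3)
        days, mu = 2000, np.log(30.0)      # long-run log-price level
        switch = np.array([[0.95, 0.05],   # row r: P(next regime | regime r)
                           [0.60, 0.40]])
        kappa = (0.10, 0.50)               # stronger reversion after spikes
        sigma = (0.05, 0.70)               # ~14x higher volatility when spiking

        lp = np.empty(days)
        lp[0], regime = mu, 0              # regime 0 = normal, 1 = spike
        for t in range(1, days):
            regime = rng.choice(2, p=switch[regime])
            jump = 1.0 if regime == 1 else 0.0
            lp[t] = (lp[t - 1] + kappa[regime] * (mu - lp[t - 1])
                     + sigma[regime] * rng.normal() + jump)
        price = np.exp(lp)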

  14. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    Science.gov (United States)

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. First, the existence of an equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and the linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee that the neural networks are globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results.

  15. Recent developments in Bayesian inference of tokamak plasma equilibria and high-dimensional stochastic quadratures

    International Nuclear Information System (INIS)

    Von Nessi, G T; Hole, M J

    2014-01-01

    We present recent results and technical breakthroughs for the Bayesian inference of tokamak equilibria using force-balance as a prior constraint. Issues surrounding model parameter representation and posterior analysis are discussed and addressed. These points motivate the recent advancements embodied in the Bayesian Equilibrium Analysis and Simulation Tool (BEAST) software being presently utilized to study equilibria on the Mega-Ampere Spherical Tokamak (MAST) experiment in the UK (von Nessi et al 2012 J. Phys. A 46 185501). State-of-the-art results of using BEAST to study MAST equilibria are reviewed, with recent code advancements being systematically presented throughout the manuscript. (paper)

  16. One-dimensional model for QCD at high energy

    International Nuclear Information System (INIS)

    Iancu, E.; Santana Amaral, J.T. de; Soyez, G.; Triantafyllopoulos, D.N.

    2007-01-01

    We propose a stochastic particle model in (1+1) dimensions, with one dimension corresponding to rapidity and the other one to the transverse size of a dipole in QCD, which mimics high-energy evolution and scattering in QCD in the presence of both saturation and particle-number fluctuations, and hence of pomeron loops. The model evolves via non-linear particle splitting, with a non-local splitting rate which is constrained by boost-invariance and multiple scattering. The splitting rate saturates at high density, much like the gluon emission rate in the JIMWLK evolution. In the mean field approximation obtained by ignoring fluctuations, the model exhibits the hallmarks of the BK equation, namely a BFKL-like evolution at low density, the formation of a traveling wave, and geometric scaling. In the full evolution including fluctuations, the geometric scaling is washed out at high energy and replaced by diffusive scaling. It is likely that the model belongs to the universality class of the reaction-diffusion process. The analysis of the model sheds new light on the pomeron loops equations in QCD and their possible improvements

  17. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
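
    For integrands with nearest-neighbour couplings, such as the topological rotor mentioned above, the recursion reduces to repeated applications of a one-dimensional rule (a transfer-matrix product). A sketch under that assumption, with an illustrative weight exp(beta cos(theta_{k+1} - theta_k)):

        import numpy as np

        beta, n, d = 1.0, 32, 50                    # 1-D rule size, chain length
        x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        theta = np.pi * (x + 1.0)                   # map to [0, 2*pi]
        w = np.pi * w

        # nearest-neighbour "transfer kernel" exp(beta * cos(theta' - theta))
        K = np.exp(beta * np.cos(theta[:, None] - theta[None, :]))

        v = w.copy()                                # integrate out the first angle
        for _ in range(d - 1):                      # absorb one link per recursion
            v = w * (K @ v)
        print("partition-function estimate:", v.sum())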

  18. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks is of interest as a promising solution for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless ... the capacity per wavelength of the femto-cell network. A bit rate up to 1.59 Gbps with fiber-wireless transmission over a 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potential ... optical access network. 2 × 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting laser (VCSEL) WDM passive optical networks (PONs) is demonstrated. We have employed polarization division multiplexing (PDM) to further increase...

  19. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  20. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
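
    The thresholding idea can be sketched as follows, with a universal threshold standing in for the adaptive, entry-dependent one of Cai and Liu (2011): remove the leading principal components as the common factors, then soft-threshold the residual covariance entrywise.

        import numpy as np

        rng = np.random.default_rng(6)
        n, p, k = 200, 100, 3
        B = rng.normal(size=(p, k))
        F = rng.normal(size=(n, k))
        X = F @ B.T + rng.normal(size=(n, p))   # approximate factor model data

        # principal orthogonal complement: remove k leading components
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        common = (U[:, :k] * s[:k]) @ Vt[:k]
        R = Xc - common                         # idiosyncratic residuals

        S = R.T @ R / n
        tau = 2.0 * np.sqrt(np.log(p) / n)      # universal threshold level
        Su = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
        np.fill_diagonal(Su, np.diag(S))        # keep variances unthresholded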

  1. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  2. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  3. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
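
    A first-order cut-HDMR approximation is easy to state concretely: f(x) ≈ f(c) + Σ_i [f(c with component i set to x_i) − f(c)] for an anchor point c. The toy function below is illustrative; the approximation is good exactly when higher-order interactions are weak.

        import numpy as np

        def f(x):                              # toy multi-parameter response
            return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2] * x[3] + x.sum()

        dim = 6
        c = np.zeros(dim)                      # anchor (cut) point
        f0 = f(c)

        def hdmr1(x):
            """First-order cut-HDMR: f0 plus 1-D component functions."""
            total = f0
            for i in range(dim):
                xi = c.copy()
                xi[i] = x[i]
                total += f(xi) - f0
            return total

        x = np.random.default_rng(4).uniform(-1, 1, dim)
        print(f(x), hdmr1(x))                  # close when interactions are weak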

  4. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing maps. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with a high risk of disruption and those with a low risk of disruption. (paper)

  5. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  6. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  7. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features, is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot.

  8. A stochastic six-degree-of-freedom flight simulator for passively controlled high power rockets

    OpenAIRE

    Box, Simon; Bishop, Christopher M.; Hunt, Hugh

    2011-01-01

    This paper presents a method for simulating the flight of a passively controlled rocket in six degrees of freedom, and the descent under parachute in three degrees of freedom. Also presented is a method for modelling the uncertainty in both the rocket dynamics and the atmospheric conditions using stochastic parameters and the Monte-Carlo method. Included within this, we present a method for quantifying the uncertainty in the atmospheric conditions using historical atmospheric data. The core si...

  9. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
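
    A hedged sketch of the linear ROSM pipeline under stated assumptions (scikit-learn PCA plus SciPy's RBFInterpolator as the radial basis stage; kernel PCA would replace PCA in the non-linear variant): compress the simulation outputs, interpolate the reduced coordinates over the design parameters, then decode.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(5)
        params = rng.uniform(-1, 1, size=(80, 3))            # design sites
        fields = np.sin(params @ rng.normal(size=(3, 500)))  # stand-in for
                                                             # expensive sims

        pca = PCA(n_components=5).fit(fields)
        coeffs = pca.transform(fields)                # reduced coordinates
        surrogate = RBFInterpolator(params, coeffs)   # cheap emulator

        new = rng.uniform(-1, 1, size=(4, 3))
        pred_fields = pca.inverse_transform(surrogate(new))  # decoded output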

  10. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  11. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  12. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  13. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation to the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy {omega} is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)

  14. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  15. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available PHYSICAL REVIEW A 96, 053860 (2017): High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa.

  16. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  17. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.

  18. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution of order g^6 by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by perf...

  19. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  20. Decentralized adaptive neural control for high-order interconnected stochastic nonlinear time-delay systems with unknown system dynamics.

    Science.gov (United States)

    Si, Wenjie; Dong, Xunde; Yang, Feifei

    2018-03-01

    This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. An appropriate Lyapunov-Krasovskii functional and the properties of hyperbolic tangent functions are then used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed which decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges in a small neighborhood of zero. A simulation example is used to further show the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik; Stefanov, Dimitar; Stavrev, Atanas

    2013-01-01

    -variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed.

  2. High energy hadron dynamics based on a Stochastic-field multi-eikonal theory

    International Nuclear Information System (INIS)

    Arnold, R.C.

    1977-06-01

    Multi-eikonal theory, using a stochastic-field representation for collective long range rapidity correlations, is developed and applied to the calculation of Regge pole parameters, high transverse momentum enhancements, and fluctuation patterns in rapidity densities. If a short-range-order model, such as the one-dimensional planar bootstrap, with only leading t-channel meson poles, is utilized as input to the multi-eikonal method, the pole spectrum is modified in three ways: promotion and renormalization of leading trajectories (suggesting an effective pomeron above unity at intermediate energies), and a proliferation of dynamical secondary trajectories, reminiscent of dual models. When transverse dimensions are included, the collective effects produce a growth with energy of large-P_τ inclusive cross-sections. Typical-event rapidity distributions, at energies of a few TeV, can be estimated by suitable approximations; the fluctuations give rise to "domain" patterns, which have the appearance of clusters separated by rapidity gaps. The relations between this approach to strong-interaction dynamics and a possible unification of weak, electromagnetic, and strong interactions are outlined.

  3. Research on nonlinear stochastic dynamical price model

    International Nuclear Information System (INIS)

    Li Jiaorui; Xu Wei; Xie Wenxian; Ren Zhengzheng

    2008-01-01

    In consideration of the many uncertain factors existing in economic systems, a nonlinear stochastic dynamical price model subjected to Gaussian white noise excitation is proposed based on a deterministic model. The one-dimensional averaged Itô stochastic differential equation for the model is derived by using the stochastic averaging method, and applied to investigate the stability of the trivial solution and the first-passage failure of the stochastic price model. The stochastic price model and the methods presented in this paper are verified by numerical studies.
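
    The averaged Itô equation itself is not reproduced in this record, so as a generic illustration only, the sketch below integrates a one-dimensional nonlinear price model driven by Gaussian white noise with the Euler-Maruyama scheme; the drift, noise intensity and parameter values are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # toy nonlinear price dynamics: dp = (a*(p_eq - p) - b*p**3) dt + sigma dW
        a, b, p_eq, sigma = 1.0, 0.1, 1.0, 0.2
        dt, n_steps = 1e-3, 100_000

        p = np.empty(n_steps)
        p[0] = 1.5
        for k in range(n_steps - 1):
            drift = a * (p_eq - p[k]) - b * p[k] ** 3
            p[k + 1] = p[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

        tail = p[n_steps // 2:]                      # discard the transient
        print(f"long-run mean {tail.mean():.3f}, std {tail.std():.3f}")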

  4. Stochastic substitute for coupled rate equations in the modeling of highly ionized transient plasmas

    International Nuclear Information System (INIS)

    Eliezer, S.; Falquina, R.; Minguez, E.

    1994-01-01

    Plasmas produced by intense laser pulses incident on solid targets often do not satisfy the conditions for local thermodynamic equilibrium, and so cannot be modeled by transport equations relying on equations of state. A proper description involves an excessively large number of coupled rate equations connecting many quantum states of numerous species having different degrees of ionization. Here we pursue a recent suggestion to model the plasma by a few dominant states perturbed by a stochastic driving force. The driving force is taken to be a Poisson impulse process, giving a Langevin equation which is equivalent to a Fokker-Planck equation for the probability density governing the distribution of electron density. An approximate solution to the Langevin equation permits calculation of the characteristic relaxation rate. An exact stationary solution to the Fokker-Planck equation is given as a function of the strength of the stochastic driving force. This stationary solution is used, along with a Laplace transform, to convert the Fokker-Planck equation to one of Schroedinger type. We consider using the classical Hamiltonian formalism and the WKB method to obtain the time-dependent solution
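
    To make the "Langevin equation driven by a Poisson impulse process" concrete, here is a hedged toy integration in Python: the electron density relaxes at a characteristic rate while receiving randomly timed positive impulses; all rates and amplitudes are invented and carry no physical units.

        import numpy as np

        rng = np.random.default_rng(7)

        gamma, n_eq = 5.0, 1.0        # relaxation rate and equilibrium density
        rate, amp = 50.0, 0.05        # Poisson impulse rate and mean impulse size
        dt, n_steps = 1e-4, 200_000

        n = np.empty(n_steps)
        n[0] = n_eq
        for k in range(n_steps - 1):
            kicks = rng.poisson(rate * dt)            # impulses arriving this step
            jump = rng.exponential(amp, kicks).sum() if kicks else 0.0
            n[k + 1] = n[k] + gamma * (n_eq - n[k]) * dt + jump

        # the stationary mean should sit near n_eq + rate*amp/gamma
        print(f"stationary mean {n[n_steps // 2:].mean():.3f}")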

  5. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
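
    Since FLANN ships inside OpenCV, a typical use is approximate matching of image descriptors; the short example below assumes the opencv-python package and uses random float32 arrays as stand-ins for real descriptors (FLANN_INDEX_KDTREE = 1 is the conventional constant selecting the randomized k-d forest).

        import numpy as np
        import cv2

        rng = np.random.default_rng(3)
        des1 = rng.standard_normal((500, 128)).astype(np.float32)    # query descriptors
        des2 = rng.standard_normal((1000, 128)).astype(np.float32)   # train descriptors

        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)   # randomized k-d forest
        search_params = dict(checks=50)            # leaves to visit: speed/accuracy knob

        flann = cv2.FlannBasedMatcher(index_params, search_params)
        matches = flann.knnMatch(des1, des2, k=2)

        # Lowe-style ratio test keeps only distinctive matches
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(len(good), "good matches")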

  6. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  7. Stochastic quantization and gauge theories

    International Nuclear Information System (INIS)

    Kolck, U. van.

    1987-01-01

    Stochastic quantization is presented taking the Fluctuation-Dissipation Theorem as a guide. It is shown that the original approach of Parisi and Wu to gauge theories fails to give the right results for gauge invariant quantities when dimensional regularization is used. Although there is a simple solution in an abelian theory, in the non-abelian case it is probably necessary to start from a BRST invariant action instead of a gauge invariant one. Stochastic regularizations are also discussed. (author) [pt

  8. Optimal Liquidation under Stochastic Liquidity

    OpenAIRE

    Becherer, Dirk; Bilarev, Todor; Frentrup, Peter

    2016-01-01

    We solve explicitly a two-dimensional singular control problem of finite fuel type for an infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with multiplicative and transient price impact. Liquidity is stochastic in that the volume effect process, which determines the inter-temporal resilience of the market in the spirit of Predoiu, Shaikhet and Shreve (2011), is driven by its own random noise. The optimal contro...

  9. Stochastic processes

    CERN Document Server

    Parzen, Emanuel

    1962-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  10. Estimation of the local response to a forcing in a high dimensional system using the fluctuation-dissipation theorem

    Directory of Open Access Journals (Sweden)

    F. C. Cooper

    2013-04-01

    Full Text Available The fluctuation-dissipation theorem (FDT has been proposed as a method of calculating the response of the earth's atmosphere to a forcing. For this problem the high dimensionality of the relevant data sets makes truncation necessary. Here we propose a method of truncation based upon the assumption that the response to a localised forcing is spatially localised, as an alternative to the standard method of choosing a number of the leading empirical orthogonal functions. For systems where this assumption holds, the response to any sufficiently small non-localised forcing may be estimated using a set of truncations that are chosen algorithmically. We test our algorithm using 36 and 72 variable versions of a stochastic Lorenz 95 system of ordinary differential equations. We find that, for long integrations, the bias in the response estimated by the FDT is reduced from ~75% of the true response to ~30%.
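
    In its quasi-Gaussian form, the FDT estimates the response operator from lagged covariances of the unforced system, R = Σ_τ C(τ) C(0)^{-1}; the sketch below applies this to a linear stochastic surrogate (where the estimate becomes exact in the long-sample limit) rather than to the Lorenz 95 system of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # surrogate "climate": stable linear stochastic system x_{t+1} = A x_t + noise
        d = 8
        A = 0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0]
        X = np.zeros((100_000, d))
        for t in range(1, len(X)):
            X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(d)
        X -= X.mean(axis=0)

        def lag_cov(X, tau):
            return X[tau:].T @ X[:len(X) - tau] / (len(X) - tau)

        C0_inv = np.linalg.inv(lag_cov(X, 0))
        R = sum(lag_cov(X, tau) @ C0_inv for tau in range(100))  # integrated response

        f = np.zeros(d)
        f[0] = 1e-2                                  # small localised forcing
        print("FDT estimate:", (R @ f)[:3])
        print("exact       :", (np.linalg.inv(np.eye(d) - A) @ f)[:3])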

  11. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  12. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
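
    As a small hedged illustration of the down-sizing strategy evaluated in the paper, namely random under-sampling of the majority class before training, using scikit-learn on synthetic high-dimensional data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # synthetic high-dimensional, class-imbalanced data (95% vs 5%)
        n_maj, n_min, p = 950, 50, 2000
        X = np.vstack([rng.standard_normal((n_maj, p)),
                       rng.standard_normal((n_min, p)) + 0.3])   # small class shift
        y = np.r_[np.zeros(n_maj), np.ones(n_min)]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # down-sizing: keep all minority samples, sub-sample the majority to match
        min_idx = np.flatnonzero(y_tr == 1)
        maj_idx = rng.choice(np.flatnonzero(y_tr == 0), size=len(min_idx), replace=False)
        balanced = np.r_[min_idx, maj_idx]

        clf = LogisticRegression(max_iter=1000).fit(X_tr[balanced], y_tr[balanced])
        # class-specific accuracies, which the paper shows diverge under imbalance
        for c in (0, 1):
            mask = y_te == c
            print(f"class {c} accuracy: {clf.score(X_te[mask], y_te[mask]):.2f}")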

  13. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)

  14. Stochastic quantization

    International Nuclear Information System (INIS)

    Klauder, J.R.

    1983-01-01

    The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)

  15. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    International Nuclear Information System (INIS)

    Dong Bing; Ren Deqing; Zhang Xi

    2011-01-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated by an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. Then the SPGD based AO is applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment and the image contrast shows improvement from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after being corrected by the SPGD based AO.
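
    The SPGD update itself is compact enough to sketch: apply a random bipolar perturbation δu to all actuator channels in parallel, measure the metric change δJ from a two-sided trial, and step along δJ·δu. The quadratic "image metric" below is a made-up stand-in for the measured far-field metric, so this is an algorithmic illustration only.

        import numpy as np

        rng = np.random.default_rng(5)

        n_act = 140                              # actuator count, as in the experiment
        u_star = rng.uniform(-1, 1, n_act)       # unknown ideal command (toy)

        def metric(u):
            """Stand-in for the measured point-source image metric (higher is better)."""
            return -np.sum((u - u_star) ** 2)

        u = np.zeros(n_act)
        gain, delta = 0.5, 0.05
        for _ in range(20_000):
            du = delta * rng.choice([-1.0, 1.0], n_act)   # bipolar parallel perturbation
            dJ = metric(u + du) - metric(u - du)          # two-sided metric difference
            u += gain * dJ * du                           # SPGD ascent step

        print(f"residual command error {np.linalg.norm(u - u_star):.2e}")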

  16. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding for generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
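
    VARS itself performs a full multi-scale variogram analysis; the fragment below only illustrates the underlying quantity, a directional variogram γ_i(h) = ½ E[(y(x + h e_i) - y(x))²] used as a sensitivity measure, on a cheap test function, and is not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        def model(x):
            """Cheap test function with deliberately unequal factor importance."""
            return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

        d, n, h = 3, 5000, 0.1
        X = rng.uniform(0, 1, (n, d))
        y = model(X)

        for i in range(d):
            Xh = X.copy()
            Xh[:, i] = np.clip(Xh[:, i] + h, 0, 1)        # shift one factor by lag h
            gamma = 0.5 * np.mean((model(Xh) - y) ** 2)   # directional variogram
            print(f"factor {i}: gamma({h}) = {gamma:.4f}")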

  17. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has consid...... is minimized. Next, the method is applied on different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high...... dimensional reliability problems in structural dynamics.
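
    A toy version of the asymptotic-sampling idea, in the spirit of Bucher's formulation: crude Monte Carlo is run with inflated input standard deviations σ = 1/f at several support points f < 1, where failures are frequent, and the reliability index is extrapolated back to f = 1 with the regression model β(f) ≈ A·f + B/f. The linear limit state below is chosen so the true β is known; treat the details as an assumption-laden sketch rather than the paper's study.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)

        d, beta_true = 100, 3.5
        def g(x):                            # linear limit state, failure when g < 0
            return beta_true - x.sum(axis=1) / np.sqrt(d)

        fs, betas = [0.3, 0.4, 0.5, 0.6], []
        for f in fs:                         # support points: inflated noise sigma = 1/f
            x = rng.standard_normal((50_000, d)) / f
            pf = np.mean(g(x) < 0.0)
            betas.append(-norm.ppf(pf))

        # fit beta(f) = A*f + B/f and extrapolate to the target f = 1
        fs = np.asarray(fs)
        A, B = np.linalg.lstsq(np.c_[fs, 1.0 / fs], np.asarray(betas), rcond=None)[0]
        print(f"extrapolated beta(1) = {A + B:.2f}  (true value {beta_true})")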

  18. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  19. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolutions of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation can not only measure the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than that in the sub-Ohmic or Ohmic thermal reservoir environment.
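
    Negativity is directly computable from the partial transpose of the density matrix; as a self-contained numerical illustration (a two-qubit Werner state rather than the paper's spin-S system):

        import numpy as np

        def negativity(rho, dA, dB):
            """Sum of |negative eigenvalues| of the partial transpose over subsystem B."""
            rho_tb = (rho.reshape(dA, dB, dA, dB)
                         .transpose(0, 3, 2, 1)
                         .reshape(dA * dB, dA * dB))
            eig = np.linalg.eigvalsh(rho_tb)
            return float(-eig[eig < 0].sum())

        # two-qubit Werner state: p |psi-><psi-| + (1 - p) I/4, entangled for p > 1/3
        psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
        for p in (0.2, 1.0 / 3.0, 0.6, 1.0):
            rho = p * np.outer(psi, psi) + (1.0 - p) * np.eye(4) / 4.0
            print(f"p = {p:.2f}: negativity = {negativity(rho, 2, 2):.3f}")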

  20. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation...... and comparison between these paradigms on a common basis. Conclusive evaluation and comparison are challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects...... of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  1. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  2. Influence of incoherent scattering on stochastic deflection of high-energy negative particle beams in bent crystals

    Energy Technology Data Exchange (ETDEWEB)

    Kirillin, I.V. [Akhiezer Institute for Theoretical Physics, National Science Center ' ' Kharkov Institute of Physics and Technology' ' , Kharkov (Ukraine); Shul' ga, N.F. [Akhiezer Institute for Theoretical Physics, National Science Center ' ' Kharkov Institute of Physics and Technology' ' , Kharkov (Ukraine); V.N. Karazin Kharkov National University, Kharkov (Ukraine); Bandiera, L. [INFN Sezione di Ferrara, Ferrara (Italy); Guidi, V.; Mazzolari, A. [INFN Sezione di Ferrara, Ferrara (Italy); Universita degli Studi di Ferrara, Dipartimento di Fisica e Scienze della Terra, Ferrara (Italy)

    2017-02-15

    An investigation on the stochastic deflection of high-energy negatively charged particles in a bent crystal was carried out. On the basis of analytical calculation and numerical simulation it was shown that there is a maximum angle at which most of the beam is deflected. The existence of a maximum, which is attained at the optimal radius of curvature, is a novelty with respect to the case of positively charged particles, for which the deflection angle can be freely increased by increasing the crystal length. This difference has to be ascribed to the stronger contribution of incoherent scattering affecting the dynamics of negative particles, which move closer to atomic nuclei and electrons. We therefore identified the ideal parameters for the exploitation of axial confinement for negatively charged particle beam manipulation in future high-energy accelerators, e.g., the ILC or muon colliders. (orig.)

  3. STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ...

    African Journals Online (AJOL)

    eobe

    STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively .... which can be seen as the detection of rare physical.

  4. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The

  5. Exploring the stochastic and deterministic aspects of cyclic emission variability on a high speed spark-ignition engine

    International Nuclear Information System (INIS)

    Karvountzis-Kontakiotis, A.; Dimaratos, A.; Ntziachristos, L.; Samaras, Z.

    2017-01-01

    This study contributes to the understanding of cycle-to-cycle emissions variability (CEV) in premixed spark-ignition combustion engines. A number of experimental investigations of cycle-to-cycle combustion variability (CCV) exist in the published literature; however, only a handful of studies deal with CEV. This study experimentally investigates the impact of CCV on the CEV of NO and CO, utilizing experimental results from a high-speed spark-ignition engine. Both CEV and CCV are shown to comprise a deterministic and a stochastic component. Results show that at maximum brake torque (MBT) operation, the indicated mean effective pressure (IMEP) is maximized and its coefficient of variation (COV_IMEP) minimized, leading to minimum variation of NO. NO variability and hence mean NO levels can be reduced by more than 50% and 30%, respectively, at advanced ignition timing, by controlling the deterministic CCV using cycle-resolved combustion control. The deterministic component of CEV increases at lean combustion (lambda = 1.12) and this overall increases NO variability. CEV was also found to decrease with engine load. At steady speed, increasing throttle position from 20% to 80% decreased COV_IMEP, COV_NO and COV_CO by 59%, 46%, and 6%, respectively. Highly resolved engine control, by means of cycle-to-cycle combustion control, appears key to limiting the deterministic component of cyclic variability and thereby reducing overall emission levels. - Highlights: • Engine emissions variability comprises both stochastic and deterministic components. • Lean and diluted combustion conditions increase emissions variability. • Advanced ignition timing enhances the deterministic component of variability. • Load increase decreases the deterministic component of variability. • The deterministic component can be reduced by highly resolved combustion control.

  6. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.
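
    The role of the anchor point z can be illustrated directly: for a rank-one f with f(z) ≠ 0, the exact identity f(x) = f(z)^{1-d} ∏_j f(z_1, ..., z_{j-1}, x_j, z_{j+1}, ..., z_d) rebuilds f from d one-dimensional families of point queries. The check below uses toy component functions and ignores the adaptive query design of the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        d = 4
        def f(x):                                   # rank-one test function
            return np.prod([np.exp(-j * xj) + 0.5 for j, xj in enumerate(x, start=1)])

        z = np.full(d, 0.5)                         # anchor point with f(z) != 0
        fz = f(z)

        def f_from_queries(x):
            """f(x) = f(z)^(1-d) * prod_j f(z with its j-th coordinate set to x_j)."""
            prod = 1.0
            for j in range(d):
                zj = z.copy()
                zj[j] = x[j]
                prod *= f(zj)
            return fz ** (1 - d) * prod

        x_test = rng.uniform(0, 1, d)
        print(f"true {f(x_test):.6f}   reconstructed {f_from_queries(x_test):.6f}")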

  7. Design guidelines for high dimensional stability of CFRP optical bench

    Science.gov (United States)

    Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe

    2013-09-01

    In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly when embodying reflective optics, angular stability is critical. Angular stability or warping stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve the perfect laminate and there will always be manufacturing errors in trying to reach a quasi-iso laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench) can be easily monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) obtained from typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The selected composite material for the study is the unidirectional prepreg from Tencate M55J/TC410. M55J is a high modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be: the ply angular errors, laminate in-plane parallelism (between 0° ply direction of both facesheets), fiber volume fraction tolerance and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize the warping stability by taking into account the main dimensional stability contributors, the bench geometry and the optical mount interface are then proposed.

  9. A Dynamical System Exhibits High Signal-to-noise Ratio Gain by Stochastic Resonance

    Science.gov (United States)

    Makra, Peter; Gingl, Zoltan

    2003-05-01

    On the basis of mixed-signal simulations, we demonstrate that signal-to-noise ratio (SNR) gains much greater than unity can be obtained in the double-well potential through stochastic resonance (SR) with a symmetric periodic pulse train as deterministic and Gaussian white noise as random excitation. We also show that significant SNR improvement is possible in this system even for a sub-threshold sinusoid input if, instead of the commonly used narrow-band SNR, we apply an equally simple but much more realistic wide-band SNR definition. Using the latter result as an argument, we draw attention to the fact that the choice of the measure used to reflect signal quality is critical with regard to the extent of signal improvement observed, and urge reconsideration of the prevalent practice in SR studies of characterising SR by the narrow-band SNR. Finally, we pose some questions concerning the possibilities of applying SNR improvement in practical set-ups.

  10. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study primarily concerns high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high-dimensional

  11. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic.  The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation.  Part III, consist...

  12. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter trading data fitting versus sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification we coined Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency, by eliminating early irrelevant features.
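
    A hedged sketch of the joint (β, σ) estimation idea via naive alternating minimization: set σ to the residual root-mean-square clipped below by a smoothing floor σ0, then refit a Lasso whose penalty is proportional to σ. This stand-in uses scikit-learn's Lasso and is not the coordinate-descent solver with safe screening rules proposed in the paper.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(8)

        n, p, noise = 100, 500, 0.5
        X = rng.standard_normal((n, p))
        beta_true = np.zeros(p)
        beta_true[:5] = 2.0
        y = X @ beta_true + noise * rng.standard_normal(n)

        lam = np.sqrt(2 * np.log(p) / n)     # universal regularisation scale
        sigma0 = 1e-3                        # smoothing floor for the noise level
        sigma = np.std(y)                    # initial noise estimate
        for _ in range(20):                  # alternate beta- and sigma-updates
            model = Lasso(alpha=lam * sigma, max_iter=10_000).fit(X, y)
            resid = y - model.predict(X)
            sigma = max(sigma0, float(np.sqrt(np.mean(resid ** 2))))

        print(f"estimated noise level {sigma:.3f} (true {noise})")
        print(f"nonzero coefficients: {np.count_nonzero(model.coef_)}")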

  13. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found in two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of a four-dimensional space R^4 onto R^3, which establishes an equivalence between Coulomb and harmonic potentials, is used to show that the exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author) [pt

  14. Stochastic Ratcheting on a Funneled Energy Landscape Is Necessary for Highly Efficient Contractility of Actomyosin Force Dipoles

    Science.gov (United States)

    Komianos, James E.; Papoian, Garegin A.

    2018-04-01

    Current understanding of how contractility emerges in disordered actomyosin networks of nonmuscle cells is still largely based on the intuition derived from earlier works on muscle contractility. In addition, in disordered networks, passive cross-linkers have been hypothesized to percolate force chains in the network, hence, establishing large-scale connectivity between local contractile clusters. This view, however, largely overlooks the free energy of cross-linker binding at the microscale, which, even in the absence of active fluctuations, provides a thermodynamic drive towards highly overlapping filamentous states. In this work, we use stochastic simulations and mean-field theory to shed light on the dynamics of a single actomyosin force dipole—a pair of antiparallel actin filaments interacting with active myosin II motors and passive cross-linkers. We first show that while passive cross-linking without motor activity can produce significant contraction between a pair of actin filaments, driven by thermodynamic favorability of cross-linker binding, a sharp onset of kinetic arrest exists at large cross-link binding energies, greatly diminishing the effectiveness of this contractility mechanism. Then, when considering an active force dipole containing nonmuscle myosin II, we find that cross-linkers can also serve as a structural ratchet when the motor dissociates stochastically from the actin filaments, resulting in significant force amplification when both molecules are present. Our results provide predictions of how actomyosin force dipoles behave at the molecular level with respect to filament boundary conditions, passive cross-linking, and motor activity, which can explicitly be tested using an optical trapping experiment.

  16. Travelling fronts in stochastic Stokes’ drifts

    KAUST Repository

    Blanchet, Adrien; Dolbeault, Jean; Kowalczyk, Michał

    2008-01-01

    By analytical methods we study the large time properties of the solution of a simple one-dimensional model of stochastic Stokes' drift. Semi-explicit formulae allow us to characterize the behaviour of the solutions and compute global quantities

  17. Characterization of highly anisotropic three-dimensionally nanostructured surfaces

    International Nuclear Information System (INIS)

    Schmidt, Daniel

    2014-01-01

    Generalized ellipsometry, a non-destructive optical characterization technique, is employed to determine geometrical structure parameters and anisotropic dielectric properties of highly spatially coherent three-dimensionally nanostructured thin films grown by glancing angle deposition. The (piecewise) homogeneous biaxial layer model approach is discussed, which can be universally applied to model the optical response of sculptured thin films with different geometries and from diverse materials, and structural parameters as well as effective optical properties of the nanostructured thin films are obtained. Alternative model approaches for slanted columnar thin films, anisotropic effective medium approximations based on the Bruggeman formalism, are presented, which deliver results comparable to the homogeneous biaxial layer approach and in addition provide film constituent volume fraction parameters as well as depolarization or shape factors. Advantages of these ellipsometry models are discussed using the example of metal slanted columnar thin films, which have been conformally coated with a thin passivating oxide layer by atomic layer deposition. Furthermore, the application of an effective medium approximation approach to in-situ growth monitoring of this anisotropic thin film functionalization process is presented. It was found that structural parameters determined with the presented optical model equivalents for slanted columnar thin films agree very well with scanning electron microscope image estimates. - Highlights: • Summary of optical model strategies for sculptured thin films with arbitrary geometries • Application of the rigorous anisotropic Bruggeman effective medium approximations • In-situ growth monitoring of atomic layer deposition on biaxial metal slanted columnar thin films

  18. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfies imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method for π0 or FDR estimation in a dependency context.
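
    Among the procedures compared, the Benjamini-Hochberg step-up rule is easy to state in code. The sketch below is the standard textbook version, not the authors' simulation code, and not the adaptive variant they found most robust:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            # Step-up rule: reject the k smallest p-values, where k is the
            # largest index with p_(k) <= (k / m) * q.
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= (np.arange(1, m + 1) / m) * q
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])
                reject[order[: k + 1]] = True
            return reject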

  19. Microfluidic engineered high cell density three-dimensional neural cultures

    Science.gov (United States)

    Cullen, D. Kacy; Vukasinovic, Jelena; Glezer, Ari; La Placa, Michelle C.

    2007-06-01

    Three-dimensional (3D) neural cultures with cells distributed throughout a thick, bioactive protein scaffold may better represent neurobiological phenomena than planar correlates lacking matrix support. Neural cells in vivo interact within a complex, multicellular environment with tightly coupled 3D cell-cell/cell-matrix interactions; however, thick 3D neural cultures at cell densities approaching that of brain rapidly decay, presumably due to diffusion-limited interstitial mass transport. To address this issue, we have developed a novel perfusion platform that utilizes forced intercellular convection to enhance mass transport. First, we demonstrated that thick (>500 µm) 3D neural cultures supported by passive diffusion alone could not sustain viability at cell densities approaching that of brain (10^4 cells mm^-3). At these densities, continuous medium perfusion at 2.0-11.0 µL min-1 improved viability compared to non-perfused cultures, which exhibited cell death and matrix degradation. In perfused cultures, survival depended on proximity to the perfusion source at 2.00-6.25 µL min-1, with >90% viability attained in both neuronal cultures and neuronal-astrocytic co-cultures. This work demonstrates the utility of forced interstitial convection in improving the survival of high cell density 3D engineered neural constructs and may aid in the development of novel tissue-engineered systems reconstituting 3D cell-cell/cell-matrix interactions.

  20. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.

  1. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  2. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  3. Stochastic cooling

    International Nuclear Information System (INIS)

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an anti p accumulator for the Tevatron.

  4. Improved stochastic resonance algorithm for enhancement of signal-to-noise ratio of high-performance liquid chromatographic signal

    International Nuclear Information System (INIS)

    Xie Shaofei; Xiang Bingren; Deng Haishan; Xiang Suyun; Lu Jun

    2007-01-01

    Based on the theory of stochastic resonance, an improved stochastic resonance algorithm with a new criterion for optimizing system parameters was presented in this study, to enhance the signal-to-noise ratio (SNR) of HPLC/UV chromatographic signals for trace analysis. Compared with the conventional criterion in stochastic resonance, the proposed one ensures satisfactory SNR as well as good chromatographic peak shape in the output signal. Application of the criterion to experimental weak HPLC/UV signals was investigated, and the results showed an excellent quantitative relationship between different concentrations and responses.
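
    For illustration, the core of such a filter can be sketched as a weak signal driven through the classic bistable stochastic-resonance nonlinearity; the parameters a and b stand in for the system parameters the paper tunes with its new criterion, which is itself not reproduced here.

        import numpy as np

        def bistable_sr(signal, a=1.0, b=1.0, dt=0.01):
            # Integrate dx/dt = a*x - b*x**3 + s(t) with forward Euler;
            # for suitable (a, b) the noise helps the weak peak cross
            # between the two wells, raising the output SNR.
            x = np.zeros(len(signal))
            for i in range(1, len(signal)):
                drift = a * x[i - 1] - b * x[i - 1] ** 3 + signal[i - 1]
                x[i] = x[i - 1] + dt * drift
            return x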

  5. Stochastic cooling at Fermilab

    International Nuclear Information System (INIS)

    Marriner, J.

    1986-08-01

    The topics discussed are the stochastic cooling systems in use at Fermilab and some of the techniques that have been employed to meet the particular requirements of the anti-proton source. Stochastic cooling at Fermilab became of paramount importance about 5 years ago when the anti-proton source group at Fermilab abandoned the electron cooling ring in favor of a high flux anti-proton source which relied solely on stochastic cooling to achieve the phase space densities necessary for colliding proton and anti-proton beams. The Fermilab systems have constituted a substantial advance in the techniques of cooling, including: large pickup arrays operating at microwave frequencies, extensive use of cryogenic techniques to reduce thermal noise, superconducting notch filters, and the development of tools for controlling and for accurately phasing the system.

  6. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  7. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  8. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
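
    The hashing primitive underlying both rigorous-LSH and the LSB-tree can be sketched as follows. This is only the standard E2LSH-style projection hash; the Z-order packing of hash values into a B-tree, which is what defines the LSB-tree, is not reproduced, and the parameters are illustrative.

        import numpy as np

        class L2LSH:
            # h(v) = floor((a . v + b) / w) with Gaussian a and uniform b:
            # nearby points collide with higher probability than far ones.
            def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
                rng = np.random.default_rng(seed)
                self.a = rng.normal(size=(n_hashes, dim))
                self.b = rng.uniform(0.0, w, size=n_hashes)
                self.w = w

            def hash(self, v):
                return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

        # Points whose hash tuples agree in many coordinates become NN candidates.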

  9. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles; Schwartz, Scott; Chapkin, Robert S.; Carroll, Raymond J.; Ivanov, Ivan

    2012-01-01

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.

  10. Feature selection for high-dimensional integrated data

    KAUST Repository

    Zheng, Charles

    2012-04-26

    Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
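
    A minimal sketch of the thresholding screen, the simpler of the two methods compared, follows. The threshold tau is illustrative; the paper calibrates it via stochastic optimization and contrasts the screen with an SVD-based score.

        import numpy as np

        def threshold_select(X, Y, tau=0.5):
            # Standardize columns and keep predictors whose strongest
            # correlation with any response column exceeds tau.
            Xs = (X - X.mean(axis=0)) / X.std(axis=0)
            Ys = (Y - Y.mean(axis=0)) / Y.std(axis=0)
            cross = Xs.T @ Ys / X.shape[0]       # p x q correlation matrix
            return np.nonzero(np.abs(cross).max(axis=1) > tau)[0]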

  11. Numerical Simulation of the Heston Model under Stochastic Correlation

    Directory of Open Access Journals (Sweden)

    Long Teng

    2017-12-01

    Full Text Available Stochastic correlation models have become increasingly important in financial markets. In order to be able to price vanilla options in stochastic volatility and correlation models, in this work we study the extension of the Heston model obtained by imposing stochastic correlations driven by a stochastic differential equation. We discuss efficient algorithms for the extended Heston model incorporating stochastic correlations. Our numerical experiments show that the proposed algorithms can efficiently provide highly accurate results for the extended Heston model. By investigating the effect of stochastic correlations on the implied volatility, we find that the performance of the Heston model can be improved by including stochastic correlations.
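
    A bare Euler-Maruyama path sketch of such a model is given below, assuming a mean-reverting, Jacobi-type correlation SDE; the paper's exact correlation dynamics and discretization may differ, and all parameter values are illustrative.

        import numpy as np

        def heston_stoch_corr_path(s0=100.0, v0=0.04, rho0=-0.5, r=0.01,
                                   kappa=2.0, theta=0.04, xi=0.3,
                                   ka=1.0, mu=-0.5, g=0.2,
                                   T=1.0, n_steps=1000, seed=0):
            # Heston price/variance with correlation following
            # d rho = ka*(mu - rho) dt + g*sqrt(1 - rho^2) dW3,
            # an assumed Jacobi-type choice kept inside (-1, 1).
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            s, v, rho = s0, v0, rho0
            for _ in range(n_steps):
                z1, z2, z3 = rng.normal(size=3)
                w2 = rho * z1 + np.sqrt(max(1.0 - rho**2, 0.0)) * z2
                s += r * s * dt + np.sqrt(max(v, 0.0) * dt) * s * z1
                v += kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0.0) * dt) * w2
                rho += ka * (mu - rho) * dt + g * np.sqrt(max(1.0 - rho**2, 0.0) * dt) * z3
                rho = float(np.clip(rho, -0.999, 0.999))
            return s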

  12. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer-generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons have shown contradictory results due to technological shortcomings. DATA SOURCES: ... Central Register of Controlled Trials database. CONCLUSIONS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are awaited to confirm these results.

  13. Stochastic Stability for Contracting Lorenz Maps and Flows

    Science.gov (United States)

    Metzger, R. J.

    In a previous work [M], we proved the existence of absolutely continuous invariant measures for contracting Lorenz-like maps, and constructed Sinai-Ruelle-Bowen measures for the flows that generate them. Here, we prove stochastic stability for such one-dimensional maps and use this result to prove that the corresponding flows generating these maps are stochastically stable under small diffusion-type perturbations, even though, as shown by Rovella [Ro], they are persistent only in a measure-theoretical sense in parameter space. For the one-dimensional maps we also prove strong stochastic stability in the sense of Baladi and Viana [BV].

  14. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  15. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    ... than a global property. Different from existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  16. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Rényi dimensionalities, calculated from experimental data (whether the hadron distribution over rapidity intervals or the particle distribution in an N-dimensional momentum space), we can judge the degree of correlation of the particles, and separate the momentum-space projections and areas where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs

  17. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

    A review is given of more recent papers on this subject. These papers have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  18. Stochastic processes

    CERN Document Server

    Borodin, Andrei N

    2017-01-01

    This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.

  19. Montmorency Cherries Reduce the Oxidative Stress and Inflammatory Responses to Repeated Days High-Intensity Stochastic Cycling

    Directory of Open Access Journals (Sweden)

    Phillip G. Bell

    2014-02-01

    Full Text Available This investigation examined the impact of Montmorency tart cherry concentrate (MC) on physiological indices of oxidative stress, inflammation and muscle damage across 3 days of simulated road cycle racing. Trained cyclists (n = 16) were divided into equal groups and consumed 30 mL of MC or placebo (PLA) twice per day for seven consecutive days. A simulated, high-intensity, stochastic road cycling trial, lasting 109 min, was completed on days 5, 6 and 7. Oxidative stress and inflammation were measured from blood samples collected at baseline and immediately pre- and post-trial on days 5, 6 and 7. Analyses for lipid hydroperoxides (LOOH), interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), interleukin-8 (IL-8), interleukin-1-beta (IL-1-β), high-sensitivity C-reactive protein (hsCRP) and creatine kinase (CK) were conducted. LOOH (p < 0.01), IL-6 (p < 0.05) and hsCRP (p < 0.05) responses to trials were lower in the MC group versus PLA. No group or interaction effects were found for the other markers. The attenuated oxidative and inflammatory responses suggest MC may be efficacious in combating post-exercise oxidative and inflammatory cascades that can contribute to cellular disruption. Additionally, we demonstrate direct application for MC in repeated days cycling and conceivably other sporting scenarios where back-to-back performances are required.

  20. Stochastic quantization and topological theories

    International Nuclear Information System (INIS)

    Fainberg, V.Y.; Subbotin, A.V.; Kuznetsov, A.N.

    1992-01-01

    In the last two years topological quantum field theories (TQFT) have attracted much attention. This paper reports that from the very beginning it was realized that, due to a peculiar BRST-like symmetry, these models admitted a so-called Nicolai mapping: the Nicolai variables, in terms of which the actions of the theories become Gaussian, are nothing but (anti-)selfduality conditions or their generalizations. This fact became a starting point in the quest for a possible stochastic interpretation of topological field theories. The reasons behind this were quite simple and included, in particular, the well-known relations between stochastic processes and supersymmetry. The main goal would have been achieved if it were possible to construct stochastic processes governed by Langevin or Fokker-Planck equations in a real Euclidean time leading to TQFTs' path integrals (equivalently: to reformulate TQFTs as non-equilibrium phase dynamics of stochastic processes). Further on, if it turned out that these processes correspond to the stochastic quantization of theories of some definite kind, one could expect (d + 1)-dimensional TQFTs to share some common properties with d-dimensional ones.

  1. Model-based Estimation of High Frequency Jump Diffusions with Microstructure Noise and Stochastic Volatility

    NARCIS (Netherlands)

    Bos, Charles S.

    2008-01-01

    When analysing the volatility related to high frequency financial data, mostly non-parametric approaches based on realised or bipower variation are applied. This article instead starts from a continuous time diffusion model and derives a parametric analog at high frequency for it, allowing

  2. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  3. A Fokker-Planck treatment of stochastic particle motion within the framework of a fully coupled 6-dimensional formalism for electron-positron storage rings including classical spin motion in linear approximation

    International Nuclear Information System (INIS)

    Barber, D.P.; Heinemann, K.; Mais, H.; Ripken, G.

    1991-12-01

    In the following report we investigate stochastic particle motion in electron-positron storage rings in the framework of a Fokker-Planck treatment. The motion is described using the canonical variables x, p_x, z, p_z, σ = s - c·t, p_σ = ΔE/E_0 of the fully six-dimensional formalism. Thus synchrotron and betatron oscillations are treated simultaneously, taking into account all kinds of coupling (synchro-betatron coupling and the coupling of the betatron oscillations by skew quadrupoles and solenoids). In order to set up the Fokker-Planck equation, action-angle variables of the linear coupled motion are introduced. The averaged dimensions of the bunch, resulting from radiation damping of the synchro-betatron oscillations and from an excitation of these oscillations by quantum fluctuations, are calculated by solving the Fokker-Planck equation. The surfaces of constant density in the six-dimensional phase space, given by six-dimensional ellipsoids, are determined. It is shown that the motion of such an ellipsoid under the influence of external fields can be described by six generating orbit vectors which may be combined into a six-dimensional matrix B(s). This 'bunch-shape matrix', B(s), contains complete information about the configuration of the bunch. Classical spin diffusion in linear approximation has also been included so that the dependence of the polarization vector on the orbital phase space coordinates can be studied and another derivation of the linearized depolarization time obtained. (orig.)

  4. Symmetries of stochastic differential equations: A geometric approach

    Energy Technology Data Exchange (ETDEWEB)

    De Vecchi, Francesco C., E-mail: francesco.devecchi@unimi.it; Ugolini, Stefania, E-mail: stefania.ugolini@unimi.it [Dipartimento di Matematica, Università degli Studi di Milano, via Saldini 50, Milano (Italy); Morando, Paola, E-mail: paola.morando@unimi.it [DISAA, Università degli Studi di Milano, via Celoria 2, Milano (Italy)

    2016-06-15

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two-dimensional symmetric ordinary differential equation and to the case of two-dimensional Brownian motion.

  5. High Weak Order Methods for Stochastic Differential Equations Based on Modified Equations

    KAUST Repository

    Abdulle, Assyr; Cohen, David; Vilmart, Gilles; Zygalakis, Konstantinos C.

    2012-01-01

    © 2012 Society for Industrial and Applied Mathematics. Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration

  6. Stochastic analysis for Poisson point processes Malliavin calculus, Wiener-Itô chaos expansions and stochastic geometry

    CERN Document Server

    Peccati, Giovanni

    2016-01-01

    Stochastic geometry is the branch of mathematics that studies geometric structures associated with random configurations, such as random graphs, tilings and mosaics. Due to its close ties with stereology and spatial statistics, the results in this area are relevant for a large number of important applications, e.g. to the mathematical modeling and statistical analysis of telecommunication networks, geostatistics and image analysis. In recent years – due mainly to the impetus of the authors and their collaborators – a powerful connection has been established between stochastic geometry and the Malliavin calculus of variations, which is a collection of probabilistic techniques based on the properties of infinite-dimensional differential operators. This has led in particular to the discovery of a large number of new quantitative limit theorems for high-dimensional geometric objects. This unique book presents an organic collection of authoritative surveys written by the principal actors in this rapidly evolvi...

  7. Stochastic geometry and its applications

    CERN Document Server

    Chiu, Sung Nok; Kendall, Wilfrid S; Mecke, Joseph

    2013-01-01

    An extensive update to a classic text Stochastic geometry and spatial statistics play a fundamental role in many modern branches of physics, materials sciences, engineering, biology and environmental sciences. They offer successful models for the description of random two- and three-dimensional micro and macro structures and statistical methods for their analysis. The previous edition of this book has served as the key reference in its field for over 18 years and is regarded as the best treatment of the subject of stochastic geometry, both as a subject with vital a

  8. Dimensional consistency achieved in high-performance synchronizing hubs

    International Nuclear Information System (INIS)

    Garcia, P.; Campos, M.; Torralba, M.

    2013-01-01

    The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production route (“2P2S”) has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process. (Author) 21 refs.

  9. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two-dimensional analysis of impurity transport in a high-recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrape-off zone shielding for sputtered impurities.

  10. Dimensional consistency achieved in high-performance synchronizing hubs

    Directory of Open Access Journals (Sweden)

    García, P.

    2013-02-01

    Full Text Available The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production route (“2P2S”) has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process.

  11. Optimal Control for Stochastic Delay Evolution Equations

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn [Huzhou University, Department of Mathematical Sciences (China); Shen, Yang, E-mail: skyshen87@gmail.com [York University, Department of Mathematics and Statistics (Canada)

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principle. To illustrate the theoretical results, we apply the stochastic maximum principle to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  12. Stochastic kinetics

    International Nuclear Information System (INIS)

    Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.

    1975-01-01

    The kinetics of a nuclear system is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. The equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion which takes into account an increasing number of physical processes within the chosen model. The most important contribution is that the solutions interpret analytically experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, source drop technique, start-up procedures).

  13. Stochastic Jeux

    Directory of Open Access Journals (Sweden)

    Romanu Ekaterini

    2006-01-01

    Full Text Available This article shows the similarities between Claude Debussy’s and Iannis Xenakis’ philosophies of music and work, in particular the former’s Jeux and the latter’s Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th-century Western music. Those two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered as constant or natural by "traditional" innovators (i.e. serialists): the linearity of musical texture, its form and rhythm.

  14. Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-01-01

    Full Text Available This paper considers a d-dimensional stochastic optimization problem in neuroscience. Supposing the arm’s movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space, the optimal trajectory, velocity, and variance are explicitly obtained by using stochastic control methods, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement; the optimal velocity is bell-shaped; and the optimal variance is consistent with the experimental Fitts law, that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.

  15. Criticality assessment for prismatic high temperature reactors by fuel stochastic Monte Carlo modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zakova, Jitka [Department of Nuclear and Reactor Physics, Royal Institute of Technology, KTH, Roslagstullsbacken 21, S-10691 Stockholm (Sweden)], E-mail: jitka.zakova@neutron.kth.se; Talamo, Alberto [Nuclear Engineering Division, Argonne National Laboratory, ANL, 9700 South Cass Avenue, Argonne, IL 60439 (United States)], E-mail: alby@anl.gov

    2008-05-15

    Modeling of prismatic high temperature reactors requires a high-precision description due to the triple heterogeneity of the core and also to the random distribution of fuel particles inside the fuel pins. On the latter issue, even with the most advanced Monte Carlo techniques, some approximations often arise while assessing the criticality level: first, a regular lattice of TRISO particles inside the fuel pins and, second, the cutting of TRISO particles by the fuel boundaries. We utilized two of the most accurate Monte Carlo codes, MONK and MCNP, which are used for licensing nuclear power plants in the United Kingdom and in the USA, respectively, to evaluate the influence of the two previous approximations on estimating the criticality level of the Gas Turbine Modular Helium Reactor. The two codes shared exactly the same geometry and nuclear data library, ENDF/B, and differed only in the lattice of TRISO particles modeled inside the fuel pins. More precisely, we investigated the difference between a regular lattice that cuts TRISO particles and a random lattice that axially repeats a region containing over 3000 non-cut particles. We have found that both Monte Carlo codes provide similar excesses of reactivity, provided that they share the same approximations.
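
    A random, non-cut particle lattice of the kind compared here can be generated by simple rejection sampling. The sketch below uses illustrative names and parameters, not code taken from either MONK or MCNP, and draws non-overlapping particle centres that stay clear of the fuel boundary:

        import numpy as np

        def random_triso_centers(n, radius, box, seed=0, max_tries=100_000):
            # Draw centres uniformly inside [radius, box - radius]^3 so no
            # particle is cut by the boundary, rejecting overlapping draws.
            rng = np.random.default_rng(seed)
            centers = []
            for _ in range(max_tries):
                c = rng.uniform(radius, box - radius, size=3)
                if all(np.linalg.norm(c - q) >= 2 * radius for q in centers):
                    centers.append(c)
                    if len(centers) == n:
                        break
            return np.array(centers)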

  16. High-resolution stochastic integrated thermal–electrical domestic demand model

    International Nuclear Information System (INIS)

    McKenna, Eoghan; Thomson, Murray

    2016-01-01

    Highlights: • A major new version of CREST’s demand model is presented. • Simulates electrical and thermal domestic demands at high resolution. • Integrated structure captures appropriate time-coincidence of variables. • Suitable for low-voltage network and urban energy analyses. • Open-source development in Excel VBA freely available for download. - Abstract: This paper describes the extension of CREST’s existing electrical domestic demand model into an integrated thermal–electrical demand model. The principal novelty of the model is its integrated structure, such that the timing of the thermal and electrical output variables is appropriately correlated. The model has been developed primarily for low-voltage network analysis, and the model’s ability to account for demand diversity is of critical importance for this application. The model, however, can also serve as a basis for modelling domestic energy demands within the broader field of urban energy systems analysis. The new model includes the previously published components associated with electrical demand and generation (appliances, lighting, and photovoltaics) and integrates these with an updated occupancy model, a solar thermal collector model, and new thermal models including a low-order building thermal model, domestic hot water consumption, thermostat and timer controls and gas boilers. The paper reviews the state-of-the-art in high-resolution domestic demand modelling, describes the model, and compares its output with three independent validation datasets. The integrated model remains an open-source development in Excel VBA and is freely available to download for users to configure and extend, or to incorporate into other models.

  17. Fundamentals of stochastic nature sciences

    CERN Document Server

    Klyatskin, Valery I

    2017-01-01

    This book addresses the processes of stochastic structure formation in two-dimensional geophysical fluid dynamics based on statistical analysis of Gaussian random fields, as well as stochastic structure formation in dynamic systems with parametric excitation of positive random fields f(r,t) described by partial differential equations. Further, the book considers two examples of stochastic structure formation in dynamic systems with parametric excitation in the presence of Gaussian pumping. In dynamic systems with parametric excitation in space and time, this type of structure formation either happens – or doesn’t! However, if it occurs in space, then this almost always happens (exponentially quickly) in individual realizations with a unit probability. In the case considered, clustering of the field f(r,t) of any nature is a general feature of dynamic fields, and one may claim that structure formation is the Law of Nature for arbitrary random fields of such type. The study clarifies the conditions under wh...

  18. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  19. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  20. STOCHASTIC DESCRIPTION OF THE HIGH-FREQUENCY CONTENT OF DAILY SUNSPOTS AND EVIDENCE FOR REGIME CHANGES

    International Nuclear Information System (INIS)

    Shapoval, A.; Le Mouël, J.-L.; Courtillot, V.; Shnirman, M.

    2015-01-01

    The irregularity index λ is applied to the high-frequency content of daily sunspot numbers ISSN. This λ is a modification of the standard maximal Lyapunov exponent. It is computed here as a function of embedding dimension m, within four-year time windows centered at the maxima of Schwabe cycles. The λ(m) curves form separate clusters (pre-1923 and post-1933). This supports a regime transition and narrows its occurrence to cycle 16, preceding the growth of activity leading to the Modern Maximum. The two regimes are reproduced by a simple autoregressive process AR(1), with the mean of Poisson noise undergoing 11 yr modulation. The autocorrelation a of the process (linked to sunspot lifetime) is a ≈ 0.8 for 1850-1923 and ≈0.95 for 1933-2013. The AR(1) model suggests that groups of spots appear with a Poisson rate and disappear at a constant rate. We further applied the irregularity index to the daily sunspot group number series for the northern and southern hemispheres, provided by the Greenwich Royal Observatory (RGO), in order to study a possible desynchronization. Correlations between the north and south λ(m) curves vary quite strongly with time and indeed show desynchronization. This may reflect a slow change in the dimension of an underlying dynamical system. The ISSN and RGO series of group numbers do not imply an identical mechanism, but both uncover a regime change at a similar time. Computation of the irregularity index near the maximum of cycle 24 will help in checking whether yet another regime change is under way
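
    The regime description admits a very small simulation sketch: an AR(1) process driven by Poisson innovations whose mean follows an 11 yr modulation. The constants below are illustrative, not fitted values from the paper, apart from the autocorrelation levels it reports.

        import numpy as np

        def sunspot_ar1(a=0.95, years=22, base_rate=100.0, seed=0):
            # Groups appear with a Poisson rate modulated over an 11 yr
            # cycle and decay at a constant rate; the autocorrelation a
            # (about 0.8 pre-1923, 0.95 post-1933) sets the spot lifetime.
            rng = np.random.default_rng(seed)
            n = int(365.25 * years)
            t = np.arange(n)
            lam = base_rate * 0.5 * (1 + np.sin(2 * np.pi * t / (11 * 365.25)))
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = a * x[i - 1] + (1 - a) * rng.poisson(lam[i])
            return x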

  1. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
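
    As a minimal sketch of this pipeline, one can estimate the projection from a sparse, uniformly drawn subsample and train the network in the projected space. PCA stands in here for the (generally nonlinear) manifold projection, and all sizes are illustrative:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        def projected_regressor(X, y, n_components=10, sample=1000, seed=0):
            rng = np.random.default_rng(seed)
            # Estimate the low-dimensional projection from a sparse,
            # uniformly drawn subsample of the (possibly huge) dataset.
            idx = rng.choice(len(X), size=min(sample, len(X)), replace=False)
            pca = PCA(n_components=n_components).fit(X[idx])
            # Approximate the target function over the projected space.
            net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                               random_state=seed)
            net.fit(pca.transform(X), y)
            return pca, net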

  2. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  3. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
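
    The field-solve step described above, Poisson's equation solved with fast Fourier transforms on a periodic grid, can be sketched as follows; this is a minimal stand-in for the paper's solver, with normalized units and periodic boundaries as simplifying assumptions.

    ```python
    import numpy as np

    def solve_poisson_fft(rho, dx):
        """Solve laplacian(phi) = -rho on a periodic 2-D grid with FFTs
        (units normalized so that the permittivity is 1)."""
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                     # avoid division by zero at the mean mode
        phi_hat = np.fft.fft2(rho) / k2
        phi_hat[0, 0] = 0.0                # fix the potential's mean to zero
        return np.real(np.fft.ifft2(phi_hat))
    ```

    The electric field would then be obtained from the gradient of the returned potential and the superparticles advanced with Newton's law; those steps, and the area-weighting scheme mentioned in the abstract, are left out of this sketch.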

  4. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  5. Stochastic modeling

    CERN Document Server

    Lanchier, Nicolas

    2017-01-01

    Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...

  6. Stochastic synchronization of coupled neural networks with intermittent control

    International Nuclear Information System (INIS)

    Yang Xinsong; Cao Jinde

    2009-01-01

    In this Letter, we study the exponential stochastic synchronization problem for coupled neural networks with stochastic noise perturbations. Based on Lyapunov stability theory, inequality techniques, the properties of the Wiener process, and adding different intermittent controllers, several sufficient conditions are obtained to ensure exponential stochastic synchronization of coupled neural networks with or without coupling delays under stochastic perturbations. These stochastic synchronization criteria are expressed in terms of several lower-dimensional linear matrix inequalities (LMIs) and can be easily verified. Moreover, the results of this Letter are applicable to both directed and undirected weighted networks. A numerical example and its simulations are offered to show the effectiveness of our new results.

  7. Stochastic Heterogeneity Mapping around a Mediterranean salt lens

    Directory of Open Access Journals (Sweden)

    G. G. Buffett

    2010-03-01

    We present the first application of Stochastic Heterogeneity Mapping based on the band-limited von Kármán function to a seismic reflection stack of a Mediterranean water eddy (meddy), a large salt lens of Mediterranean water. This process extracts two stochastic parameters directly from the reflectivity field of the seismic data: the Hurst number, which ranges from 0 to 1, and the correlation length (scale length). Lower Hurst numbers represent a richer range of high wavenumbers and correspond to a broader range of heterogeneity in reflection events. The Hurst number estimate for the top of the meddy (0.39) compares well with recent theoretical work, which required values between 0.25 and 0.5 to model internal wave surfaces in open ocean conditions based on simulating a Garrett-Munk spectrum (GM76) slope of −2. The scale lengths obtained do not fit as well to seismic reflection events as those used in other studies to model internal waves. We suggest two explanations for this discrepancy: (1) because the stochastic parameters are derived from the reflectivity field rather than the impedance field, the estimated scale lengths may be underestimated, as has been reported; and (2) because the meddy seismic image is a two-dimensional slice of a complex and dynamic three-dimensional object, the derived scale lengths are biased to the direction of flow. Nonetheless, varying stochastic parameters, which correspond to different spectral slopes in the Garrett-Munk (horizontal wavenumber) spectrum, can provide an estimate of different internal wave scales from seismic data alone. We hence introduce Stochastic Heterogeneity Mapping as a novel tool in physical oceanography.
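
    Extracting the two stochastic parameters from a 1-D reflectivity trace can be sketched as a spectral fit. The von Kármán power-spectrum form below (written up to normalization), the trace sampling, and all numerical settings are assumptions for illustration, and the white-noise trace is a placeholder for real data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import welch

    def von_karman_psd(k, sigma2, a, H):
        # Assumed 1-D von Karman form, up to normalization:
        # P(k) ~ sigma2 * a / (1 + (k * a)**2) ** (H + 0.5)
        return sigma2 * a / (1.0 + (k * a) ** 2) ** (H + 0.5)

    dz = 5.0                                  # assumed trace sampling (m)
    rng = np.random.default_rng(2)
    trace = rng.normal(size=4096)             # placeholder reflectivity trace
    k, psd = welch(trace, fs=1.0 / dz, nperseg=1024)
    mask = k > 0
    popt, _ = curve_fit(von_karman_psd, k[mask], psd[mask],
                        p0=[psd[mask].max(), 100.0, 0.4],
                        bounds=([0.0, 1.0, 0.01], [np.inf, 1e4, 1.0]))
    sigma2, corr_length, hurst = popt         # fitted stochastic parameters
    ```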

  8. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  9. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
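
    The core step, writing a query state as a convex combination of library states while making the approximation error explicit, maps naturally onto a linear program. The sketch below is a hedged reading of that idea, not the authors' code: it minimizes the L1 error with scipy.optimize.linprog, and all names and sizes are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def barycentric_weights(library, target):
        """Express `target` (d,) as a convex combination of the rows of
        `library` (m, d), minimizing the L1 approximation error via an LP."""
        m, d = library.shape
        # Decision vector: [w_1..w_m, e_1..e_d]; minimize the summed errors e.
        c = np.concatenate([np.zeros(m), np.ones(d)])
        A_ub = np.block([
            [library.T, -np.eye(d)],    #  (library^T w) - e <= target
            [-library.T, -np.eye(d)],   # -(library^T w) - e <= -target
        ])
        b_ub = np.concatenate([target, -target])
        A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]  # sum(w) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (m + d))
        return res.x[:m], res.x[m:]

    lib = np.random.default_rng(0).normal(size=(30, 5))
    w, err = barycentric_weights(lib, lib.mean(axis=0))  # interior point: err ~ 0
    ```

    For free-running prediction one would take the library rows from observed state (or delay) vectors and apply the recovered weights to their successors; that usage is an assumption consistent with the abstract, not a detail it states.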

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data

  11. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  12. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  13. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  14. Two dimensional simulation of high power laser-surface interaction

    International Nuclear Information System (INIS)

    Goldman, S.R.; Wilke, M.D.; Green, R.E.L.; Johnson, R.P.; Busch, G.E.

    1998-01-01

    For laser intensities in the range of 10⁸–10⁹ W/cm², and pulse lengths of order 10 microseconds or longer, the authors have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense material aspects of the laser-matter interaction. The unique aspect of their treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir, suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. The authors will present an analysis of some relatively well diagnosed experiments which have been useful in developing their modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence their simulational technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity, pulse-width range. This work is useful in many technical areas such as materials processing

  15. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen ...

  16. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow generalizes superprocesses of stochastic flows and the stochastic diffeomorphisms induced by the strong solutions of stochastic differential equations.

  17. Stochastic processes and filtering theory

    CERN Document Server

    Jazwinski, Andrew H

    1970-01-01

    This unified treatment of linear and nonlinear filtering theory presents material previously available only in journals, and in terms accessible to engineering students. Its sole prerequisites are advanced calculus, the theory of ordinary differential equations, and matrix analysis. Although theory is emphasized, the text discusses numerous practical applications as well. Taking the state-space approach to filtering, this text models dynamical systems by finite-dimensional Markov processes, outputs of stochastic difference and differential equations. Starting with background material on probab

  18. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and that they prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  19. Stochastic spatio-temporal modelling of African swine fever spread in the European Union during the high risk period.

    Science.gov (United States)

    Nigsch, Annette; Costard, Solenne; Jones, Bryony A; Pfeiffer, Dirk U; Wieland, Barbara

    2013-03-01

    African swine fever (ASF) is a notifiable viral pig disease with high mortality and serious socio-economic consequences. Since ASF emerged in Georgia in 2007 the disease has spread to several neighbouring countries and cases have been detected in areas bordering the European Union (EU). It is uncertain how fast the virus would be able to spread within the unrestricted European trading area if it were introduced into the EU. This project therefore aimed to develop a model for the spread of ASF within and between the 27 Member States (MS) of the EU during the high risk period (HRP) and to identify MS that during that period would most likely contribute to ASF spread ("super-spreaders") or MS that would most likely receive cases from other MS ("super-receivers"). A stochastic spatio-temporal state-transition model using simulated individual farm records was developed to assess silent ASF virus spread during different predefined HRPs of 10-60 days duration. Infection was seeded into farms of different pig production types in each of the 27 MS. Direct pig-to-pig transmission and indirect transmission routes (pig transport lorries and professional contacts) were considered the main pathways during the early stages of an epidemic. The model was parameterised using data collated from EUROSTAT, TRACES, a questionnaire sent to MS, and the scientific literature. Model outputs showed that virus circulation was generally limited to 1-2 infected premises per outbreak (95% IQR: 1-4; maximum: 10), with large breeder farms as index cases resulting in the most infected premises. Seven MS caused between-MS spread due to intra-Community trade during the first 10 days after seeding infection. For a HRP of 60 days from virus introduction, movements of infected pigs originated at least once from 16 MS, with 6 MS spreading ASF in more than 10% of iterations. Two thirds of all intra-Community spread was linked to six trade links only. Denmark, the Netherlands, Lithuania and Latvia were identified
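
    To make the model class concrete, the toy sketch below runs a daily-time-step stochastic state-transition simulation of silent spread among farms with random contact events. The structure loosely mirrors the description above, but every parameter value (contact and transmission probabilities, herd count) is an invented placeholder, not the study's parameterisation.

    ```python
    import numpy as np

    def simulate_spread(n_farms=1000, p_contact=0.002, p_transmit=0.3,
                        hrp_days=60, n_seed=1, seed=0):
        """Toy daily-time-step state-transition model of silent spread during
        the high-risk period: each infected farm makes random trade or
        professional contacts that may infect susceptible farms."""
        rng = np.random.default_rng(seed)
        infected = np.zeros(n_farms, dtype=bool)
        infected[rng.choice(n_farms, n_seed, replace=False)] = True
        history = [int(infected.sum())]
        for _ in range(hrp_days):
            n_inf = infected.sum()
            # Daily probability that a susceptible farm receives an
            # infectious contact from at least one infected farm:
            p_day = 1.0 - (1.0 - p_contact * p_transmit) ** n_inf
            infected |= (~infected) & (rng.random(n_farms) < p_day)
            history.append(int(infected.sum()))
        return np.array(history)

    runs = np.stack([simulate_spread(seed=s) for s in range(100)])
    print("median infected farms after 60 days:", np.median(runs[:, -1]))
    ```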

  20. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
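
    The coupling described, gradient-based Langevin MCMC run in a reduced coordinate space, can be sketched generically. Below is a standard Metropolis-adjusted Langevin (MALA) step; the log-posterior and its gradient in the reduced coordinates z are assumed to be supplied (for example by an adjoint solve composed with a kernel-PCA map, as the abstract suggests), and nothing here is the authors' implementation.

    ```python
    import numpy as np

    def mala_step(z, log_post, grad_log_post, step, rng):
        """One Metropolis-adjusted Langevin (MALA) update in the reduced
        coordinate space z."""
        mean_fwd = z + 0.5 * step * grad_log_post(z)
        prop = mean_fwd + np.sqrt(step) * rng.normal(size=z.shape)
        mean_bwd = prop + 0.5 * step * grad_log_post(prop)
        # Log proposal densities (up to a common constant) for the MH correction.
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * step)
        log_q_bwd = -np.sum((z - mean_bwd) ** 2) / (2.0 * step)
        log_alpha = log_post(prop) - log_post(z) + log_q_bwd - log_q_fwd
        return prop if np.log(rng.random()) < log_alpha else z

    # Demo on a standard normal "posterior" in a 3-D reduced space:
    rng = np.random.default_rng(0)
    z = np.zeros(3)
    for _ in range(1000):
        z = mala_step(z, lambda u: -0.5 * np.sum(u**2), lambda u: -u, 0.5, rng)
    ```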

  1. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment, to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach is started from the full characterization of entangled spiral bandwidth, and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise in quantum information applications defined in high-dimensional Hilbert space. (letter)

  2. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate-that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  3. Stochastic stacking without filters

    International Nuclear Information System (INIS)

    Johnson, R.P.; Marriner, J.

    1982-12-01

    The rate of accumulation of antiprotons is a critical factor in the design of p anti p colliders. A design of a system to accumulate higher anti p fluxes is presented here which is an alternative to the schemes used at the CERN AA and in the Fermilab Tevatron I design. Contrary to these stacking schemes, which use a system of notch filters to protect the dense core of antiprotons from the high power of the stack tail stochastic cooling, an eddy current shutter is used to protect the core in the region of the stack tail cooling kicker. Without filters one can have larger cooling bandwidths, better mixing for stochastic cooling, and easier operational criteria for the power amplifiers. In the case considered here a flux of 1.4 x 10 8 per sec is achieved with a 4 to 8 GHz bandwidth

  4. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance for each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from its ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset so as to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
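
    A hedged sketch of the two-part pipeline follows. PCA stands in for the deep autoencoder's compact representation (an explicit simplification), while the ensemble of k-NN detectors on random subsets follows the description above; the data and all settings are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import NearestNeighbors

    def ensemble_knn_scores(X_train, X_test, n_members=10, subset=0.5, k=5, seed=0):
        """Anomaly score = average k-NN distance over an ensemble of detectors,
        each built from a random subset of the compressed training data."""
        rng = np.random.default_rng(seed)
        n = len(X_train)
        scores = np.zeros(len(X_test))
        for _ in range(n_members):
            idx = rng.choice(n, size=int(subset * n), replace=False)
            nn = NearestNeighbors(n_neighbors=k).fit(X_train[idx])
            dist, _ = nn.kneighbors(X_test)
            scores += dist.mean(axis=1)
        return scores / n_members

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 100))                      # nominal data
    X_out = rng.normal(loc=3.0, size=(20, 100))           # injected anomalies
    enc = PCA(n_components=10).fit(X)                     # stand-in for the DAE
    scores = ensemble_knn_scores(enc.transform(X),
                                 enc.transform(np.vstack([X, X_out])))
    ```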

  5. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

    Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during tasks was filled by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    Science.gov (United States)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling filters, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
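
    The transport model named above, a continuous time random walk with gamma-distributed velocities, can be illustrated in a stripped-down, uncorrelated limit: particles traverse fixed-length segments at gamma-distributed speeds, and spreading is read off as the variance of positions over time. Everything below, including the parameter values, is an illustrative assumption rather than the study's calibrated correlated model.

    ```python
    import numpy as np

    def ctrw_spreading(n_particles=2000, n_segments=300, ell=1.0,
                       shape=0.5, scale=1.0, seed=0):
        """Particles traverse fixed segments of length ell at speeds drawn
        from a gamma distribution; spreading is the variance of positions
        at common times (uncorrelated limit of a correlated CTRW)."""
        rng = np.random.default_rng(seed)
        v = rng.gamma(shape, scale, size=(n_particles, n_segments))
        t = np.cumsum(ell / v, axis=1)              # arrival times per particle
        x = ell * np.arange(1, n_segments + 1)      # distance after each segment
        t_grid = np.linspace(0.0, t[:, -1].min(), 200)
        pos = np.stack([np.interp(t_grid,
                                  np.concatenate(([0.0], t[i])),
                                  np.concatenate(([0.0], x)))
                        for i in range(n_particles)])
        return t_grid, pos.var(axis=0)              # time grid and spreading
    ```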

  7. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
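
    As a hedged illustration of what variable (sample-point) kernel density estimation involves, the sketch below assigns each training point its own bandwidth from the distance to its k-th nearest neighbour and evaluates a Gaussian product kernel; this bandwidth rule and all settings are common textbook choices, not necessarily those of the paper.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def variable_kde(train, query, k=10):
        """Sample-point (variable-bandwidth) Gaussian KDE: each training point
        gets its own bandwidth, the distance to its k-th nearest neighbour."""
        n, d = train.shape
        dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(train).kneighbors(train)
        h = dist[:, -1]                                   # per-point bandwidths
        diffs = query[:, None, :] - train[None, :, :]     # (n_query, n, d)
        sq = (diffs ** 2).sum(-1) / h[None, :] ** 2
        norm = (2.0 * np.pi) ** (d / 2.0) * h ** d
        return (np.exp(-0.5 * sq) / norm[None, :]).mean(axis=1)

    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 5))
    density = variable_kde(train, train[:10])
    ```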

  8. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.
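
    For readers unfamiliar with the method, the sketch below is a bare-bones one-dimensional FDTD loop in normalized units; it is far simpler than the paper's high-resolution 2-D/3-D simulations, and the grid size, source, and step counts are arbitrary assumptions. Rerunning with finer cells at a fixed Courant number and comparing outputs gives the kind of mesh-convergence test the abstract describes.

    ```python
    import numpy as np

    def fdtd_1d(n_cells=400, n_steps=600, courant=0.5, src=100):
        """Bare-bones 1-D FDTD in normalized units (E scaled by the free-space
        impedance) with a soft Gaussian source."""
        ez = np.zeros(n_cells)
        hy = np.zeros(n_cells - 1)
        for n in range(n_steps):
            hy += courant * np.diff(ez)                 # H update from curl E
            ez[1:-1] += courant * np.diff(hy)           # E update from curl H
            ez[src] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source
        return ez
    ```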

  9. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  10. HASE: Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    G.V. Roshchupkin (Gennady); H.H.H. Adams (Hieab); M.W. Vernooij (Meike); A. Hofman (Albert); C.M. van Duijn (Cornelia); M.K. Ikram (Kamran); W.J. Niessen (Wiro)

    2016-01-01

    textabstractHigh-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  11. HASE : Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    Roshchupkin, G. V.; Adams, H; Vernooij, Meike W.; Hofman, A; Van Duijn, C. M.; Ikram, M. Arfan; Niessen, W.J.

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  12. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  13. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage method in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and give numerical results to show the effectiveness of the proposed data storage.

  14. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.; Schoen, Alia P.; Hu, Liangbing; Kim, Han Sun; Heilshorn, Sarah C.; Cui, Yi

    2010-01-01

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. We here present a textile-based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, makes a gravity-fed device operating at 100,000 L/(h m²) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  15. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.

    2010-09-08

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. We here present a textile-based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, makes a gravity-fed device operating at 100,000 L/(h m²) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  16. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed into a Bayesian network structure learning problem. • Examples are given in flows in random media.

  17. Stochastic resonance

    International Nuclear Information System (INIS)

    Wellens, Thomas; Shatokhin, Vyacheslav; Buchleitner, Andreas

    2004-01-01

    We are taught by conventional wisdom that the transmission and detection of signals is hindered by noise. However, during the last two decades, the paradigm of stochastic resonance (SR) proved this assertion wrong: indeed, addition of the appropriate amount of noise can boost a signal and hence facilitate its detection in a noisy environment. Due to its simplicity and robustness, SR has been implemented by mother nature on almost every scale, thus attracting interdisciplinary interest from physicists, geologists, engineers, biologists and medical doctors, who nowadays use it as an instrument for their specific purposes. At the present time, there exist many diverse models of SR. Taking into account the progress achieved in both theoretical understanding and practical application of this phenomenon, we put the focus of the present review not on discussing in depth technical details of different models and approaches but rather on presenting a general and clear physical picture of SR on a pedagogical level. Particular emphasis will be given to the implementation of SR in generic quantum systems, an issue that has received limited attention in earlier review papers on the topic. The major part of our presentation relies on the two-state model of SR (or on simple variants thereof), which is general enough to exhibit the main features of SR and, in fact, covers many (if not most) of the examples of SR published so far. In order to highlight the diversity of the two-state model, we shall discuss several examples from such different fields as condensed matter, nonlinear and quantum optics and biophysics. Finally, we also discuss some situations that go beyond the generic SR scenario but are still characterized by a constructive role of noise

  18. Quantum stochastic calculus associated with quadratic quantum noises

    International Nuclear Information System (INIS)

    Ji, Un Cig; Sinha, Kalyan B.

    2016-01-01

    We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time, and then we study the quantum stochastic integrals associated with the class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus

  19. Quantum stochastic calculus associated with quadratic quantum noises

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr [Department of Mathematics, Research Institute of Mathematical Finance, Chungbuk National University, Cheongju, Chungbuk 28644 (Korea, Republic of); Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in [Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore-64, India and Department of Mathematics, Indian Institute of Science, Bangalore-12 (India)

    2016-02-15

    We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time, and then we study the quantum stochastic integrals associated with the class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.

  20. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  1. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
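
    A minimal experiment in the spirit of this comparison can be run with off-the-shelf estimators; the sketch below contrasts the log-determinant from the sample covariance with two shrinkage estimators on data whose true log-determinant is zero. The specific estimators and problem sizes are choices of this sketch, not the eight methods compared in the paper.

    ```python
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, LedoitWolf, OAS

    rng = np.random.default_rng(0)
    p, n = 50, 100                      # dimension comparable to sample size
    X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

    for name, est in [("sample", EmpiricalCovariance()),
                      ("Ledoit-Wolf", LedoitWolf()),
                      ("OAS", OAS())]:
        sign, logdet = np.linalg.slogdet(est.fit(X).covariance_)
        print(f"{name:12s} log-determinant = {logdet: .2f} (truth: 0.00)")
    ```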

  2. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  3. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  4. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional
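
    The construction described, a graph-based isometric mapping from the microstructure manifold M to a low-dimensional set A, is in the spirit of manifold-learning algorithms such as Isomap. The hedged sketch below uses scikit-learn's Isomap as a concrete stand-in; the synthetic "microstructures" and all settings are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    # Synthetic stand-in for microstructure samples: a 1-D manifold (circle)
    # linearly embedded in R^100; real inputs would be flattened property maps.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, 800)
    M = np.column_stack([np.cos(theta), np.sin(theta)]) @ rng.normal(size=(2, 100))

    # Graph-based isometric embedding F : M -> A with A in R^d, d << n.
    A = Isomap(n_neighbors=10, n_components=2).fit_transform(M)
    ```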

  5. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  6. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

    Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur remains elusive. We report that molybdenum disulfide layers are able to serve as the nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined in-between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), agree with the computational insights. Due to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s⁻¹ without the addition of conductive additives. Using such hybrid structures as electrodes, the two-electrode supercapacitor cells yield a power density of 106 W kg⁻¹ and an energy density of 47.5 Wh kg⁻¹ in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  7. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensions occupies a large proportion of the similarity, so that any two results appear almost equally dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
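
    A minimal sketch of the interval-based similarity just described, assuming equal-width intervals and caller-supplied per-dimension ranges; the exact normalization used by the authors is not specified in this record, so the function below is only one plausible reading:

      import numpy as np

      def lattice_similarity(x, y, lo, hi, n_bins=10):
          # Map each component onto its interval index within [lo, hi].
          bx = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
          by = np.clip(((y - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
          # Only dimensions whose components fall in the same or an
          # adjacent interval contribute to the similarity.
          mask = np.abs(bx - by) <= 1
          if not mask.any():
              return 0.0
          # Per-dimension closeness normalized to [0, 1], averaged over
          # all dimensions so the overall similarity stays in [0, 1].
          closeness = 1.0 - np.abs(x[mask] - y[mask]) / (hi[mask] - lo[mask])
          return float(closeness.sum() / x.size)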

  8. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  9. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  10. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
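
    The core operation behind DataHigh (records 8-10 above) is projecting a d-dimensional latent space onto 2-d planes. DataHigh itself is a Matlab GUI; the Python fragment below is only a sketch of the projection step, with one random orthonormal plane standing in for the continuum of planes the user would sweep through interactively:

      import numpy as np

      def random_2d_projection(latents, seed=None):
          """Project (timepoints x d) latent trajectories onto a random
          2-d plane with an orthonormal basis obtained by QR decomposition."""
          rng = np.random.default_rng(seed)
          d = latents.shape[1]
          q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
          return latents @ q          # shape (timepoints, 2)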

  11. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing, such as the requirement...

  12. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, the article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
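
    A minimal sketch of the diffusion-map step, assuming a Gaussian kernel with a user-chosen bandwidth eps; the Gaussian process regression that the record builds on top of the resulting diffusion distances is omitted:

      import numpy as np

      def diffusion_map(X, eps, n_components=2, t=1):
          # Gaussian kernel on pairwise squared Euclidean distances.
          D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-D2 / eps)
          # Row-normalize into a Markov transition matrix.
          P = K / K.sum(axis=1, keepdims=True)
          vals, vecs = np.linalg.eig(P)
          order = np.argsort(-vals.real)
          idx = order[1:n_components + 1]   # skip the trivial eigenvector
          # Diffusion coordinates: eigenvectors scaled by eigenvalues^t.
          return (vals.real[idx] ** t) * vecs.real[:, idx]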

  13. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  14. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization can be significantly improved by adjusting the system parameters: the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  15. Semilinear Kolmogorov Equations and Applications to Stochastic Optimal Control

    International Nuclear Information System (INIS)

    Masiero, Federica

    2005-01-01

    Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton-Jacobi-Bellman equation. These results are applied to some controlled stochastic partial differential equations

  16. New travelling wave solutions for nonlinear stochastic evolution

    Indian Academy of Sciences (India)

    The nonlinear stochastic evolution equations have a wide range of applications in physics, chemistry, biology, economics and finance from various points of view. In this paper, the (G′/G)-expansion method is implemented for obtaining new travelling wave solutions of the nonlinear (2 + 1)-dimensional stochastic ...

  17. A stochastic modeling of recurrent measles epidemic | Kassem ...

    African Journals Online (AJOL)

    A simple stochastic mathematical model is developed and investigated for the dynamics of measles epidemic. The model, which is a multi-dimensional diffusion process, includes susceptible individuals, latent (exposed), infected and removed individuals. Stochastic effects are assumed to arise in the process of infection of ...

  18. Stochastic optimization of loading pattern for PWR

    International Nuclear Information System (INIS)

    Smuc, T.; Pevec, D.

    1994-01-01

    The application of stochastic optimization methods to in-core fuel management problems is constrained by the need to evaluate a large number of proposed solutions (loading patterns) if a high-quality final solution is wanted. Proposed loading patterns have to be evaluated by a core neutronics simulator, which can impose unrealistic computer time requirements. A new loading pattern optimization code, Monte Carlo Loading Pattern Search, has been developed by coupling the simulated annealing optimization algorithm with a fast one-and-a-half dimensional core depletion simulator. The structure of the optimization method provides more efficient performance and allows the user to employ previous experience in the search process, thus reducing the size of the search space. Hereinafter, we discuss the characteristics of the method and illustrate them on the results obtained by solving the PWR reload problem. (authors). 7 refs., 1 tab., 1 fig
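
    A minimal sketch of the simulated annealing loop at the heart of such a search; the cost and neighbor arguments are placeholders for the core-depletion simulator and the loading-pattern perturbation, neither of which is specified in this record:

      import math
      import random

      def simulated_annealing(init, cost, neighbor, t0=1.0, cooling=0.995, steps=5000):
          x, fx = init, cost(init)
          best, fbest = x, fx
          t = t0
          for _ in range(steps):
              y = neighbor(x)
              fy = cost(y)
              # Accept improvements always; accept worse patterns with
              # Boltzmann probability, which shrinks as t cools.
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling
          return best, fbest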

  19. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions ... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating ...

  20. Stochastic development regression using method of moments

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite-dimensional landmark manifolds.

  1. Stochastic space-time and quantum theory

    International Nuclear Information System (INIS)

    Frederick, C.

    1976-01-01

    Much of quantum mechanics may be derived if one adopts a very strong form of Mach's principle such that in the absence of mass, space-time becomes not flat, but stochastic. This is manifested in the metric tensor, which is considered to be a collection of stochastic variables. The stochastic-metric assumption is sufficient to generate the spread of the wave packet in empty space. If one further notes that all observations of dynamical variables in the laboratory frame are contravariant components of tensors, and if one assumes that a Lagrangian can be constructed, then one can obtain an explanation of conjugate variables and also a derivation of the uncertainty principle. Finally, the superposition of stochastic metrics and the identification of √-g in the four-dimensional invariant volume element √-g dV as the indicator of relative probability yields the phenomenon of interference, as will be described for the two-slit experiment

  2. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually ... -dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling ...

  3. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested on a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  4. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)]

    2016-07-27

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested on a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  5. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Clustering techniques for high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method for clustering data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the cluster number from the user. The PCM is made similarity-based by combining it with the mountain method. Although this already yields efficient clustering, the result is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified on synthetic datasets.

  6. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often ... the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.

  7. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in the electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified, highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  8. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in the electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified, highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  9. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
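
    A minimal sketch of the linear slow feature analysis step used in the first stage, for a multivariate time series X of shape (timesteps, channels); the hierarchical SFA network and the reward-trained readout described in the record are omitted:

      import numpy as np

      def slow_features(X, n_features=2):
          X = X - X.mean(axis=0)
          # Whiten using the eigendecomposition of the covariance.
          d, E = np.linalg.eigh(np.cov(X.T))
          Z = X @ (E / np.sqrt(d + 1e-12))
          # The slowest directions minimize the variance of the temporal
          # differences of the whitened signal (smallest eigenvalues first).
          d2, E2 = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
          return Z @ E2[:, :n_features]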

  10. Geometric integrators for stochastic rigid body dynamics

    KAUST Repository

    Tretyakov, Mikhail

    2016-01-05

    Geometric integrators play an important role in simulating dynamical systems on long time intervals with high accuracy. We will illustrate geometric integration ideas within the stochastic context, mostly on examples of stochastic thermostats for rigid body dynamics. The talk will be mainly based on joint recent work with Ruslan Davidchack and Tom Ouldridge.

  11. Geometric integrators for stochastic rigid body dynamics

    KAUST Repository

    Tretyakov, Mikhail

    2016-01-01

    Geometric integrators play an important role in simulating dynamical systems on long time intervals with high accuracy. We will illustrate geometric integration ideas within the stochastic context, mostly on examples of stochastic thermostats for rigid body dynamics. The talk will be mainly based on joint recent work with Ruslan Davidchack and Tom Ouldridge.

  12. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  13. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
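
    A minimal sketch of a one-dimensional gPC projection, assuming a standard Gaussian input and probabilists' Hermite polynomials (NumPy's hermite_e module); multi-dimensional expansions tensorize this construction. As a sanity check, for f = exp the coefficients should approach the known values e^(1/2)/k!:

      import math
      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      def gpc_coefficients(f, order):
          # Gauss-Hermite nodes and weights; the normalized weights act
          # as the N(0, 1) probability measure at the quadrature nodes.
          x, w = hermegauss(order + 1)
          w = w / w.sum()
          coeffs = []
          for k in range(order + 1):
              basis = np.zeros(k + 1)
              basis[k] = 1.0
              Hk = hermeval(x, basis)            # He_k at the nodes
              # c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
              coeffs.append(float((w * f(x) * Hk).sum()) / math.factorial(k))
          return np.array(coeffs)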

  14. Quantum stochastic calculus and representations of Lie superalgebras

    CERN Document Server

    Eyre, Timothy M W

    1998-01-01

    This book describes the representations of Lie superalgebras that are yielded by a graded version of Hudson-Parthasarathy quantum stochastic calculus. Quantum stochastic calculus and grading theory are given concise introductions, extending readership to mathematicians and physicists with a basic knowledge of algebra and infinite-dimensional Hilbert spaces. The development of an explicit formula for the chaotic expansion of a polynomial of quantum stochastic integrals is particularly interesting. The book aims to provide a self-contained exposition of what is known about Z_2-graded quantum stochastic calculus and to provide a framework for future research into this new and fertile area.

  15. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of high-speed shaking-table structures. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking-table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy is achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to the traditional transducer technique for monitoring the dynamic response of shaking-table structures.

  16. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  17. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
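
    A minimal sketch of one of the compared alternatives, an L1-penalized (LASSO-type) propensity model over a high-dimensional proxy space, using scikit-learn; the hdPS prioritization rules themselves are not reproduced, and the variable names are illustrative:

      import numpy as np
      from sklearn.linear_model import LogisticRegressionCV

      def lasso_propensity_scores(X, treated):
          # Cross-validated L1-penalized logistic regression selects a
          # sparse subset of the proxy covariates for the treatment model.
          model = LogisticRegressionCV(penalty="l1", solver="saga",
                                       cv=5, max_iter=5000)
          model.fit(X, treated)
          return model.predict_proba(X)[:, 1]   # estimated propensity scores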

  18. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013.

  19. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  20. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  1. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  2. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...

  3. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  4. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high-dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large-dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  5. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  6. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  7. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
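
    The irregular-grid scheme is only partially specified across these records. As a point of reference for the same pricing problem, a minimal sketch of the standard least-squares Monte Carlo (Longstaff-Schwartz) benchmark for an American put under Black-Scholes dynamics; this is a different, simulation-based method, shown only because it is the usual baseline in this setting:

      import numpy as np

      def american_put_lsmc(s0, strike, r, sigma, T, n_steps=50, n_paths=20000, seed=0):
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          z = rng.standard_normal((n_paths, n_steps))
          S = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                                    + sigma * np.sqrt(dt) * z, axis=1))
          cash = np.maximum(strike - S[:, -1], 0.0)   # payoff at maturity
          for t in range(n_steps - 2, -1, -1):
              cash *= np.exp(-r * dt)                 # discount one step
              itm = strike - S[:, t] > 0
              if itm.sum() > 3:
                  # Regress continuation values on a quadratic price basis.
                  A = np.vander(S[itm, t], 3)
                  coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
                  exercise = (strike - S[itm, t]) > A @ coef
                  idx = np.where(itm)[0][exercise]
                  cash[idx] = strike - S[idx, t]      # exercise early
          return float(np.exp(-r * dt) * cash.mean())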

  8. Applied probability and stochastic processes

    CERN Document Server

    Sumita, Ushio

    1999-01-01

    Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...

  9. STOCHASTIC METHODS IN RISK ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vladimíra OSADSKÁ

    2017-06-01

    Full Text Available In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend highly on the practical experience and knowledge of the evaluator, and that stochastic methods should therefore be introduced. New risk analysis methods should consider the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical example of FMECA with uncertainties modelled using Monte Carlo sampling.

  10. Noncausal stochastic calculus

    CERN Document Server

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  11. Simple stochastic simulation.

    Science.gov (United States)

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
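
    A minimal sketch of Gillespie's direct method (the 1977 algorithm cited above) for a birth-death process; the rate constants are illustrative:

      import math
      import random

      def gillespie(propensities, update, state, t_end):
          t, path = 0.0, [(0.0, list(state))]
          while t < t_end:
              a = propensities(state)
              a0 = sum(a)
              if a0 == 0.0:
                  break
              # Exponential waiting time to the next reaction event.
              t += -math.log(random.random()) / a0
              # Choose a reaction with probability proportional to its rate.
              r, k, acc = random.random() * a0, 0, a[0]
              while acc < r:
                  k += 1
                  acc += a[k]
              state = update(state, k)
              path.append((t, list(state)))
          return path

      # Birth-death example: X -> X+1 at rate b; X -> X-1 at rate d*X.
      b, d = 5.0, 0.1
      path = gillespie(lambda s: [b, d * s[0]],
                       lambda s, k: [s[0] + 1] if k == 0 else [s[0] - 1],
                       [0], t_end=100.0)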

  12. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  13. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  14. Computing the optimal path in stochastic dynamical systems

    International Nuclear Information System (INIS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-01-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  15. Stochastic control of traffic patterns

    DEFF Research Database (Denmark)

    Gaididei, Yuri B.; Gorria, Carlos; Berkemer, Rainer

    2013-01-01

    A stochastic modulation of the safety distance can reduce traffic jams. It is found that the effect of random modulation on congestive flow formation depends on the spatial correlation of the noise. Jam creation is suppressed for highly correlated noise. The results demonstrate the advantage of h...

  16. Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors

    Science.gov (United States)

    2017-07-01

    Technical report AFRL-RY-WP-TR-2017-0143, Qing Hao, The University of Arizona. This report is available to the general public, including foreign nationals.

  17. Stochastic stability and bifurcation in a macroeconomic model

    International Nuclear Information System (INIS)

    Li Wei; Xu Wei; Zhao Junfeng; Jin Yanfei

    2007-01-01

    On the basis of the work of Goodwin and Puu, a new business cycle model subject to a stochastically parametric excitation is derived in this paper. First, we reduce the model to a one-dimensional diffusion process by applying the stochastic averaging method of the quasi-nonintegrable Hamiltonian system. Second, we utilize the methods of the Lyapunov exponent and of boundary classification associated with diffusion processes, respectively, to analyze the stochastic stability of the trivial solution of the system. The numerical results obtained illustrate that the trivial solution of the system must be globally stable if it is locally stable in the state space. Third, we explore the stochastic Hopf bifurcation of the business cycle model according to the qualitative changes in the stationary probability density of the system response. It is concluded that the stochastic Hopf bifurcation occurs at two critical parametric values. Finally, some explanations are given in a simple way on the potential applications of stochastic stability and bifurcation analysis
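
    The Lyapunov-exponent criterion used here can be approximated numerically. A minimal sketch using Euler-Maruyama with norm renormalization, valid when the drift and the (diagonal) noise are homogeneous of degree one in the state, as in the linearized dynamics; parameter values are illustrative:

      import numpy as np

      def top_lyapunov(drift, sigma, x0, dt=1e-3, steps=200_000, seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float)
          log_growth = 0.0
          for _ in range(steps):
              dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
              x = x + drift(x) * dt + sigma(x) * dW
              r = np.linalg.norm(x)
              log_growth += np.log(r)
              x = x / r        # renormalize to avoid over/underflow
          return log_growth / (steps * dt)

      # 1-d check: geometric Brownian motion has exact exponent mu - s**2/2.
      mu, s = 0.5, 1.0
      lam = top_lyapunov(lambda x: mu * x, lambda x: s * x, [1.0])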

  18. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
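
    A minimal sketch of the sampling-plus-LASSO part of the design, for a continuous trait y, using scikit-learn's LassoCV; the decorrelated score test that gives EPS-LASSO its inference is not reproduced here, and the tail fraction is illustrative:

      import numpy as np
      from sklearn.linear_model import LassoCV

      def eps_lasso_fit(X, y, tail_fraction=0.2):
          # Keep only samples from the two phenotype extremes (EPS design).
          lo, hi = np.quantile(y, [tail_fraction, 1.0 - tail_fraction])
          keep = (y <= lo) | (y >= hi)
          # Cross-validated LASSO models the joint effects of the predictors.
          return LassoCV(cv=5).fit(X[keep], y[keep])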

  19. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    Science.gov (United States)

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 70s. Many approaches dealing with various sensor resolutions, the nature of the scene or the desired accuracy of the extracted objects have been presented. This topic remains challenging today as the need for accurate and up-to-date data is becoming more and more important. In this context, we study in this paper the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, these are very useful clues, as evidence of a road, but also for tasks of a higher level. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term measuring the consistency of the image with respect to the objects and a regularising term managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented showing that our

  20. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high-dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high-dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions to be computed efficiently. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
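
    GRRLF generalizes classical reduced rank regression. For reference, a minimal sketch of the classical version (OLS followed by an SVD truncation of the fitted responses), which GRRLF extends with latent factors and nonparametric field estimates:

      import numpy as np

      def reduced_rank_regression(X, Y, rank):
          # Unconstrained least-squares coefficients.
          B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
          # Project fitted responses onto their leading singular directions.
          _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
          P = Vt[:rank].T @ Vt[:rank]
          return B_ols @ P          # rank-constrained coefficient matrix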

  1. Stochastic Blind Motion Deblurring

    KAUST Repository

    Xiao, Lei

    2015-05-13

    Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can therefore only be obtained with the help of prior information in the form of (often non-convex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors produces results with PSNR values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms.

  2. Suppression of large edge localized modes with a stochastic magnetic boundary in high confinement DIII-D plasmas

    International Nuclear Information System (INIS)

    Evans, T.E.; Moyer, R.A.; Watkins, J.G.

    2005-01-01

    Large sub-millisecond heat pulses due to Type-I ELMs have been eliminated reproducibly in DIII-D for periods approaching 7 energy confinement times with small dc currents driven in a simple magnetic perturbation coil. The current required to eliminate all but a few isolated Type-I ELM impulses during a coil pulse is less than 0.4% of the plasma current. Based on vacuum magnetic field line modeling, the perturbation fields resonate strongly with plasma flux surfaces across most of the pedestal region (0.9 ≤ Ψ_N ≤ 1.0) when q_95 = 3.7±0.2, creating small remnant magnetic islands surrounded by weakly stochastic field lines. The stored energy, β_N, H-mode quality factor and global energy confinement time are unaltered. Although some isolated ELM-like events typically occur, long periods free of large Type-I ELMs (Δt > 4-6 τ_E) have been reproduced numerous times, on multiple experimental run days, including cases matching the ITER scenario 2 flux surface shape. Since large Type-I ELM impulses represent a severe constraint on the survivability of the divertor target plates in future fusion devices such as ITER, a proven method of eliminating these impulses is critical for the development of tokamak reactors. Results presented in this paper indicate that non-axisymmetric edge magnetic perturbations could be a promising option for controlling ELMs in future tokamaks such as ITER. (author)

  3. Multiple fields in stochastic inflation

    Energy Technology Data Exchange (ETDEWEB)

    Assadullahi, Hooshyar [Institute of Cosmology & Gravitation, University of Portsmouth,Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Firouzjahi, Hassan [School of Astronomy, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Noorbala, Mahdiyar [Department of Physics, University of Tehran,P.O. Box 14395-547, Tehran (Iran, Islamic Republic of); School of Astronomy, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Vennin, Vincent; Wands, David [Institute of Cosmology & Gravitation, University of Portsmouth,Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom)

    2016-06-24

    Stochastic effects in multi-field inflationary scenarios are investigated. A hierarchy of diffusion equations is derived, the solutions of which yield moments of the numbers of inflationary e-folds. Solving the resulting partial differential equations in multi-dimensional field space is more challenging than in the single-field case. A few tractable examples are discussed, which show that the number of fields is, in general, a critical parameter. When more than two fields are present, for instance, the probability of exploring arbitrarily large-field regions of the potential, otherwise inaccessible to single-field dynamics, becomes non-zero. In some configurations, this gives rise to an infinite mean number of e-folds, regardless of the initial conditions. Another difference with respect to single-field scenarios is that multi-field stochastic effects can be large even at sub-Planckian energy. This opens interesting new possibilities for probing quantum effects in inflationary dynamics, since the moments of the numbers of e-folds can be used to calculate the distribution of primordial density perturbations in the stochastic-δN formalism.
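
    The moments referred to above come from first-passage relations. As a hedged, generic illustration (single field, notation ours, not the paper's multi-field hierarchy), the mean number of e-folds for a one-dimensional diffusion solves the standard adjoint (backward Kolmogorov) equation:

```latex
% d\phi = a(\phi)\,dN + b(\phi)\,dW_N, absorbed at \phi_{end},
% reflecting at \phi_{max}; <N>(\phi) is the mean number of e-folds.
\begin{equation*}
  a(\phi)\,\frac{d\langle N\rangle}{d\phi}
  + \frac{b^{2}(\phi)}{2}\,\frac{d^{2}\langle N\rangle}{d\phi^{2}} = -1,
  \qquad
  \langle N\rangle(\phi_{\mathrm{end}}) = 0,\qquad
  \left.\frac{d\langle N\rangle}{d\phi}\right|_{\phi_{\mathrm{max}}} = 0.
\end{equation*}
```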

  4. Distributed Adaptive Neural Network Output Tracking of Leader-Following High-Order Stochastic Nonlinear Multiagent Systems With Unknown Dead-Zone Input.

    Science.gov (United States)

    Hua, Changchun; Zhang, Liuliu; Guan, Xinping

    2017-01-01

    This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. Then, we design the controllers based on the backstepping method and the dynamic surface control technique. It is strictly proved, based on Lyapunov stability theory, that the resulting closed-loop system is stable in probability in the sense of semiglobal uniform ultimate boundedness, and that the tracking errors between the leader and the followers converge to a small residual set. Finally, two simulation examples are presented to show the effectiveness and the advantages of the proposed techniques.

  5. Extinction in neutrally stable stochastic Lotka-Volterra models

    Science.gov (United States)

    Dobrinevski, Alexander; Frey, Erwin

    2012-05-01

    Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. Here, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: It destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
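
    The intrinsic noise at work here is easy to reproduce with Gillespie's stochastic simulation algorithm. The sketch below simulates the classic two-species predator-prey system (the paper treats generalized Lotka-Volterra models; rates and initial numbers are illustrative):

```python
import numpy as np

def gillespie_lv(a, b, c, prey0, pred0, t_max, seed=0):
    """Exact stochastic simulation of the classic Lotka-Volterra
    reactions: prey birth (rate a*x), predation (rate b*x*y),
    predator death (rate c*y)."""
    rng = np.random.default_rng(seed)
    t, x, y = 0.0, prey0, pred0
    times, prey, pred = [t], [x], [y]
    while t < t_max and x > 0 and y > 0:   # stop at extinction
        rates = np.array([a * x, b * x * y, c * y], dtype=float)
        total = rates.sum()
        t += rng.exponential(1.0 / total)  # waiting time to next reaction
        r = rng.choice(3, p=rates / total)
        if r == 0:
            x += 1                         # prey reproduces
        elif r == 1:
            x -= 1; y += 1                 # predator consumes prey
        else:
            y -= 1                         # predator dies
        times.append(t); prey.append(x); pred.append(y)
    return np.array(times), np.array(prey), np.array(pred)

# e.g. t, x, y = gillespie_lv(1.0, 0.005, 1.0, 200, 100, t_max=50.0)
```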

  6. Adaptive stochastic Galerkin FEM with hierarchical tensor representations

    KAUST Repository

    Eigel, Martin

    2016-01-08

    PDEs with stochastic data usually lead to very high-dimensional algebraic problems which easily become infeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive stochastic Galerkin FEM based on a residual a posteriori error estimator was presented and the convergence of the adaptive algorithm was shown. While this approach leads to a drastic reduction of the complexity of the problem due to the iterative discovery of the sparsity of the solution, the tractable problem size and structure are still rather limited. To allow for larger and more general problems, we exploit the tensor structure of the parametric problem by representing operator and solution iterates in the tensor train (TT) format. The (successive) compression carried out with these representations can be seen as a generalisation of some other model reduction techniques, e.g. the reduced basis method. We show that this approach facilitates the efficient computation of different error indicators related to the computational mesh, the active polynomial chaos index set, and the TT rank. In particular, the curse of dimensionality is avoided.

  7. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression, where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need only consider G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse, affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
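
    A schematic of the kind of sparse factor decomposition described above (notation ours, not taken from the paper): the genetic covariance of the high-dimensional trait vector is built from a small number of sparse factor loadings.

```latex
% Sparse factor model for a high-dimensional trait vector y_i
% (k latent factors; Lambda sparse, Psi diagonal residual):
\begin{align*}
  \mathbf{y}_i &= \boldsymbol{\Lambda}\mathbf{f}_i + \mathbf{e}_i,
  \qquad \mathbf{f}_i \sim \mathcal{N}(\mathbf{0},\mathbf{I}_k), \\
  \mathbf{G} &\approx \boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Psi},
  \qquad \boldsymbol{\Lambda}\ \text{sparse: few nonzero loadings per factor}.
\end{align*}
```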

  8. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  9. On orthogonality preserving quadratic stochastic operators

    Energy Technology Data Exchange (ETDEWEB)

    Mukhamedov, Farrukh; Taha, Muhammad Hafizuddin Mohd [Department of Computational and Theoretical Sciences, Faculty of Science International Islamic University Malaysia, P.O. Box 141, 25710 Kuantan, Pahang Malaysia (Malaysia)

    2015-05-15

    A quadratic stochastic operator (QSO for short) is usually used to describe the time evolution of competing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSOs in terms of absolute continuity of discrete measures. Further, we introduce the notion of an orthogonality preserving QSO and describe such operators defined on the two-dimensional simplex. It turns out that orthogonality preserving QSOs are permutations of Volterra QSOs. The associativity of genetic algebras generated by orthogonality preserving QSOs is also studied.

  10. On orthogonality preserving quadratic stochastic operators

    International Nuclear Information System (INIS)

    Mukhamedov, Farrukh; Taha, Muhammad Hafizuddin Mohd

    2015-01-01

    A quadratic stochastic operator (QSO for short) is usually used to describe the time evolution of competing species in biology. Some quadratic stochastic operators have been studied by Lotka and Volterra. In the present paper, we first give a simple characterization of Volterra QSOs in terms of absolute continuity of discrete measures. Further, we introduce the notion of an orthogonality preserving QSO and describe such operators defined on the two-dimensional simplex. It turns out that orthogonality preserving QSOs are permutations of Volterra QSOs. The associativity of genetic algebras generated by orthogonality preserving QSOs is also studied.

  11. Elitism and Stochastic Dominance

    OpenAIRE

    Bazen, Stephen; Moyes, Patrick

    2011-01-01

    Stochastic dominance has typically been used with a special emphasis on risk and inequality reduction, a concern captured by the concavity of the utility function in the expected utility model. We claim that the applicability of the stochastic dominance approach goes far beyond risk and inequality measurement, provided suitable adaptations are made. In this paper we apply the stochastic dominance approach to the measurement of elitism, which may be considered the opposite of egalitarianism. While the...
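
    For reference, the textbook dominance orderings that such adaptations start from (these are the standard definitions, not the paper's elitism criteria) read:

```latex
% First- and second-order stochastic dominance of F over G,
% stated on cumulative distribution functions.
\begin{align*}
  F \succeq_{1} G &\iff F(x) \le G(x) \quad \text{for all } x, \\
  F \succeq_{2} G &\iff \int_{-\infty}^{x} F(t)\,dt
      \le \int_{-\infty}^{x} G(t)\,dt \quad \text{for all } x.
\end{align*}
```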

  12. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
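
    The nesting of one-dimensional solvers is easy to sketch. Below, scipy.integrate.quad stands in for the GSL routines used in the report (function names and tolerances are ours):

```python
from scipy.integrate import quad

def quad4(f, limits, eps=1e-6):
    """Four-dimensional integral via nested one-dimensional quadrature.
    `limits` is a list of four (lower, upper) pairs."""
    (a1, b1), (a2, b2), (a3, b3), (a4, b4) = limits
    inner3 = lambda x, y, z: quad(lambda w: f(x, y, z, w), a4, b4, epsabs=eps)[0]
    inner2 = lambda x, y: quad(lambda z: inner3(x, y, z), a3, b3, epsabs=eps)[0]
    inner1 = lambda x: quad(lambda y: inner2(x, y), a2, b2, epsabs=eps)[0]
    return quad(inner1, a1, b1, epsabs=eps)[0]

# Sanity check: the integral of 1 over the unit hypercube is 1.
print(quad4(lambda x, y, z, w: 1.0, [(0, 1)] * 4))
```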

  13. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure, forming a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. The approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality, and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
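
    A minimal sketch of the projection interpretation, assuming unit-normalised vectors: the similarity is the cosine (the projection coefficient) and the dissimilarity is the norm of the orthogonal component, so the pair always satisfies s² + d² = 1.

```python
import numpy as np

def sim_dissim(x, y):
    """Similarity as the normalised projection (cosine) and
    dissimilarity as the norm of the orthogonal remainder."""
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    s = float(np.dot(x, y))                 # projection of x onto y
    d = float(np.linalg.norm(x - s * y))    # orthogonal component, sqrt(1 - s**2)
    return s, d

s, d = sim_dissim(np.array([1.0, 2.0, 3.0]), np.array([2.0, 2.0, 1.0]))
assert abs(s**2 + d**2 - 1.0) < 1e-12
```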

  14. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields of this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). Both methodologies share a common limitation: restricted applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available for researchers.
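
    The matrix-multiplication idea can be conveyed in a few lines of NumPy (toy data, ours): a single product of the binary data matrix with its transpose yields the support of every item pair at once, and rows sharing an item subset seed a bicluster.

```python
import numpy as np

# Toy binary data: rows are transactions/samples, columns are items/genes.
B = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]], dtype=np.int64)

# One matrix product gives the co-occurrence (support) of every item pair:
support = B.T @ B          # support[i, j] = #rows containing both i and j

# Rows containing both items 0 and 1 form the seed of a bicluster:
rows = np.flatnonzero(B[:, 0] & B[:, 1])
print(support)
print(rows)                # -> [0 1 3]
```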

  15. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights on cellular heterogeneity in the last decade have provoked the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools, with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting high-dimensional single cell data. The underlying assumptions, unique features and limitations of the analytical methods, along with the biological questions they seek to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed to describe superconductivity in the framework of a nonadiabatic Heisenberg model in order to interpret the outstanding symmetry properties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-Tc compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energy excitations (of well-defined symmetry) which mediate pair formation. This fact is proposed to be responsible for the high transition temperatures of these compounds. (author)

  17. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  18. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    Science.gov (United States)

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471

  19. Singular stochastic differential equations

    CERN Document Server

    Cherny, Alexander S

    2005-01-01

    The authors introduce, in this research monograph on stochastic differential equations, a class of points termed isolated singular points. Stochastic differential equations possessing such points (called singular stochastic differential equations here) arise often in theory and in applications. However, known conditions for the existence and uniqueness of a solution typically fail for such equations. The book concentrates on the study of the existence, the uniqueness, and, what is most important, on the qualitative behaviour of solutions of singular stochastic differential equations. This is done by providing a qualitative classification of isolated singular points, into 48 possible types.

  20. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  1. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas; Arsenault, Éric; Heiniger, Leo-Philipp; Soheilnia, Navid; Brillet, Jérémie; Moehl, Thomas; Zakeeruddin, Shaik; Ozin, Geoffrey A.; Grätzel, Michael

    2011-01-01

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  2. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering the transport of ions and neutrals is performed, and the mechanism behind reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to contribute to the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  3. Reduced, three-dimensional, nonlinear equations for high-β plasmas including toroidal effects

    International Nuclear Information System (INIS)

    Schmalz, R.

    1980-11-01

    The resistive MHD equations for toroidal plasma configurations are reduced by expanding to second order in ε, the inverse aspect ratio, allowing for high β = μ_0 p/B^2 of order ε. The result is a closed system of nonlinear, three-dimensional equations where the fast magnetohydrodynamic time scale is eliminated. In particular, the equation for the toroidal velocity remains decoupled. (orig.)

  4. Two- and three-dimensional heat analysis inside a high-pressure electrical discharge tube

    International Nuclear Information System (INIS)

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents a heat transfer analysis for a horizontal high-pressure mercury vapour discharge tube. To obtain a more realistic numerical simulation, radiative heat transfer in several wavelength bands is included alongside convective and conductive heat transfer. The analysis is carried out for different gases at different pressures, in both two- and three-dimensional cases, and the results are compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied.

  5. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
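
    A toy version of the scheme, on the logistic map rather than a physical system: the accessible parameter is modulated periodically and the asymptotic orbit is inspected. Values here are illustrative; whether the chaotic attractor collapses to a limit cycle depends on the drive amplitude and frequency.

```python
import numpy as np

def logistic_perturbed(a0=3.9, eps=0.02, omega=2 * np.pi / 3.0,
                       n=4000, x0=0.4):
    """Iterate x -> a_t * x * (1 - x) with a periodically modulated
    parameter a_t = a0 * (1 + eps * cos(omega * t)). The drive period
    of 3 targets the vicinity of a period-3 unstable orbit; eps is kept
    small enough that a_t stays below 4 and the orbit stays in [0, 1]."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        a = a0 * (1.0 + eps * np.cos(omega * t))
        x[t + 1] = a * x[t] * (1.0 - x[t])
    return x

tail = logistic_perturbed()[-12:]   # inspect the asymptotic behaviour
print(np.round(tail, 4))
```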

  6. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  7. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  8. Scaling of the stochastic broadening from low mn, high mn, and peeling-ballooning magnetic perturbations in the DIII-D tokamak

    Science.gov (United States)

    Zhao, Michael; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The equilibrium EFIT data for the DIII-D shot 115467 is used to construct the equilibrium generating function for magnetic field line trajectories in the DIII-D tokamak in natural canonical coordinates [A. Punjabi, and H. Ali, Phys. Plasmas 15, 122502 (2008)]. A canonical transformation is used to construct an area-preserving map for field line trajectories in the natural canonical coordinates in the DIII-D. Maps in natural canonical coordinates have the advantage that natural canonical coordinates can be inverted to calculate real space coordinates (R,Z,φ), and there is no problem in crossing the separatrix. This is not possible for magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. This map is applied to calculate stochastic broadening from the low mn (m,n)=(1,1)+(1,-1); high mn (m,n)=(4,1)+(3,1); and the peeling-ballooning (m,n)=(40,10)+(30,10) magnetic perturbations. In all three cases, the scaling of the width of the stochastic layer near the X-point in the principal plane of the DIII-D deviates by at most 6% from the 1/2 power Boozer-Rechester scaling [A. Boozer, and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  9. STOCHASTIC CHARACTERISTICS AND MODELING OF RELATIVE ...

    African Journals Online (AJOL)

    Results are highly accurate and promising for all models based on Lewis' criteria. ... hydrological cycle. Future increases in ...

  10. The intrinsic stochasticity of near-integrable Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Krlin, L [Ceskoslovenska Akademie Ved, Prague (Czechoslovakia). Ustav Fyziky Plazmatu

    1989-09-01

    Under certain conditions, the dynamics of near-integrable Hamiltonian systems appears to be stochastic. This stochasticity (intrinsic stochasticity, or deterministic chaos) is closely related to the Kolmogorov-Arnold-Moser (KAM) theorem on the stability of near-integrable multiperiodic Hamiltonian systems. The effect of intrinsic stochasticity continues to attract growing attention both in theory and in various applications in contemporary physics. The paper discusses the relation of intrinsic stochasticity to modern ergodic theory and to the KAM theorem, and describes some numerical experiments on related astrophysical and high-temperature plasma problems. Some open questions are mentioned in conclusion. (author).

  11. High-definition resolution three-dimensional imaging systems in laparoscopic radical prostatectomy: randomized comparative study with high-definition resolution two-dimensional systems.

    Science.gov (United States)

    Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi

    2015-08-01

    Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery is that the surgeon must convert two-dimensional (2D) images into 3D images and rearrange depth perception. 3D imaging may remove the need for this depth-perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcomes of 3D high-definition (HD) and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether LRP under HD 3D imaging is superior to LRP under HD 2D imaging in perioperative outcome, feasibility, and fatigue. One hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was the time to perform vesicourethral anastomosis (VUA), which is technically demanding and encompasses a number of the technical difficulties encountered in laparoscopic surgery. VUA time was not significantly shorter in the 3D group (26.7 min, mean) than in the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors of shorter VUA times (p = 0.000, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion from 3D to 2D or from LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency; results were not different between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.

  12. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that increasing the number of hypercycle units entails a longer resilience time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is studied numerically, focusing on the extinction dynamics associated with the ghosts. Such networks allow one to explore the properties of ghosts living in high dimensional phase space, with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.

  13. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios have been widely used in fields including the aerospace and defense industries, while the dimensional measurement of these micro parts has become a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for the precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures based on spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of these probes are analyzed. To improve the performance of these probes, several approaches are proposed: a two-dimensional orthogonal path arrangement is proposed to enhance the dimensional measurement ability of MFL-collimation probes, while a high-resolution, fast-response interrogation method based on a differential scheme is used to improve the accuracy and dynamic characteristics of the FBG probes. Experiments with these specially structured fiber probes are reported, with a focus on their characteristics, and engineering applications are presented to demonstrate their practicality. To improve the accuracy and real-time performance of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was thereby verified through both analysis and experiments.

  14. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step, and the quality of features in discriminating different classes plays an important role. In real life, pattern classification may require a high dimensional feature space, which is impossible to visualize when its dimension is greater than four. In this paper, we propose a Similarity-Dissimilarity plot which can project a high dimensional space onto a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier; hence, approximate classification accuracy can be predicted. Moreover, it is possible to see with which class the misclassified data points are likely to be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.

  15. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
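
    A compact sketch of the select-then-binarise pipeline. The importance scores below (variance, or between-class spread when labels are available) are simple stand-ins for the paper's importance sorting algorithm:

```python
import numpy as np

def select_and_binarize(X, k, y=None):
    """Rank dimensions by an importance score, keep the top-k,
    then 1-bit quantise each kept dimension by its sign."""
    Xc = X - X.mean(axis=0)
    if y is None:
        score = Xc.var(axis=0)                 # unsupervised importance
    else:
        classes = np.unique(y)
        means = np.stack([Xc[y == c].mean(axis=0) for c in classes])
        score = means.var(axis=0)              # between-class spread per dim
    keep = np.argsort(score)[::-1][:k]
    return np.signbit(Xc[:, keep]), keep       # 1 bit per kept dimension

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4096))           # stand-in for FV/VLAD vectors
codes, kept = select_and_binarize(X, k=256)
print(codes.shape, codes.dtype)                # (100, 256) bool -> 1 bit/dim
```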

  16. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
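
    The matrix form of the generalised X gate is standard and easy to check numerically (the paper's contribution is its photonic implementation on orbital angular momentum modes, not this matrix): X acts on a d-level system as a cyclic shift, X|j> = |j+1 mod d>, and its d-th power is the identity.

```python
import numpy as np

def x_gate(d):
    """Generalised Pauli X on a d-level system: X|j> = |j+1 mod d>."""
    return np.roll(np.eye(d, dtype=complex), 1, axis=0)

X = x_gate(4)
for k in range(1, 5):
    Xk = np.linalg.matrix_power(X, k)
    # every integer power is unitary
    assert np.allclose(Xk @ Xk.conj().T, np.eye(4))
assert np.allclose(np.linalg.matrix_power(X, 4), np.eye(4))  # X^d = I
print(X.real)
```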

  17. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forest, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and such a trend poses various challenges as these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is considered in dividing the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
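
    A runnable sketch of the overall scheme with scikit-learn, using a random partition of the feature indices as a stand-in for the paper's redundancy-based partitioning:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Partition the feature space, train one SVM per subset, majority-vote.
X, y = make_classification(n_samples=300, n_features=200, n_informative=30,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
subsets = np.array_split(rng.permutation(X.shape[1]), 5)
models = [SVC().fit(Xtr[:, s], ytr) for s in subsets]

votes = np.stack([m.predict(Xte[:, s]) for m, s in zip(models, subsets)])
majority = (votes.mean(axis=0) > 0.5).astype(int)   # 5 voters, no ties
print("ensemble accuracy:", (majority == yte).mean())
```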

  18. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

    High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for the high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals).

  19. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
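
    In the spirit of the first model, a biased one-dimensional random walk for a filament length (a toy stand-in, parameters ours) already shows the linear mean growth that such calculations quantify:

```python
import numpy as np

def filament_lengths(p_grow=0.52, steps=10000, runs=200, seed=0):
    """Biased random walk for filament length: each tick the tip gains
    a subunit with probability p_grow, otherwise loses one; length is
    reflected at zero."""
    rng = np.random.default_rng(seed)
    L = np.zeros(runs, dtype=np.int64)
    mean_L = np.empty(steps)
    for t in range(steps):
        step = np.where(rng.random(runs) < p_grow, 1, -1)
        L = np.maximum(L + step, 0)        # length cannot go negative
        mean_L[t] = L.mean()
    return mean_L

m = filament_lengths()
print(m[-1], (2 * 0.52 - 1) * 10000)   # simulated vs drift prediction (2p-1)t
```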

  20. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, high-resolution CT and ultra-high-speed CT applicable to the heart are still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the filtered back projection (FBP) method has been employed as a reconstruction method to directly process fan-beam projection data. Although a two-dimensional Fourier transform (TFT) method significantly faster than the FBP method was proposed, it had not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert fan-beam projection data to parallel-beam projection data and thereafter applies a two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded by the interpolation in the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, numerical and visual evaluation based on simulated and actual data shows that spline interpolation allows the acquisition of high-quality images with fewer errors. Computation time was reduced to 1/15 for a 512 image matrix and to 1/30 for a doubled matrix. (Wakatsuki, Y.)

  1. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Supercapacitors are a new type of energy-storage device and have attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered a promising supercapacitor material because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and the three-dimensional (3D) graphene foam is then prepared by using nickel foam as a template and FeCl3/HCl solution as an etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance, exhibiting a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. It is expected that 3D graphene foam will find application in supercapacitors.

  2. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  3. Hierarchical one-dimensional ammonium nickel phosphate microrods for high-performance pseudocapacitors

    CSIR Research Space (South Africa)

    Raju, K

    2015-12-01

    Full Text Available :17629 | DOI: 10.1038/srep17629 www.nature.com/scientificreports Hierarchical One-Dimensional Ammonium Nickel Phosphate Microrods for High-Performance Pseudocapacitors Kumar Raju1 & Kenneth I. Ozoemena1,2 High-performance electrochemical capacitors... OPEN w w w . n a t u r e . c o m / s c i e n t i f i c r e p o r t s / 2S C I E N T I F I C REPORTS | 5:17629 | DOI: 10.1038/srep17629 Hierarchical 1-D and 2-D materials maximize the supercapacitive properties due to their unique ability to permit ion...

  4. On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy

    Science.gov (United States)

    Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.

    2017-10-01

    High dose rate brachytherapy calls for frequent reassurance of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions; the latter are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
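
    Classical multi-dimensional scaling is the step that removes the need for an external reference: it reconstructs a point configuration from pairwise distances, up to a rigid motion. A minimal NumPy sketch (ours):

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed points in R^k from a matrix D of pairwise Euclidean
    distances; the result is unique up to rotation and translation."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Round-trip check on random 3-D points:
P = np.random.default_rng(1).standard_normal((20, 3))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
Q = classical_mds(D)
D2 = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
print(np.allclose(D, D2, atol=1e-6))           # True: distances recovered
```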

  5. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  6. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
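
    As a hedged illustration of the locality-sensitive hashing idea underlying the LSB-tree (this sketch uses random-hyperplane LSH with exact re-ranking of candidates, not the LSB-tree structure itself):

```python
import numpy as np

class SignLSH:
    """Random-hyperplane LSH: nearby vectors collide in some hash table
    with high probability, so only a small candidate set is re-ranked."""
    def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_tables, n_bits, dim))
        self.tables = [dict() for _ in range(n_tables)]

    def _keys(self, x):
        # One bit per hyperplane: which side of the plane x lies on.
        return [(p @ x > 0).tobytes() for p in self.planes]

    def index(self, X):
        self.X = X
        for i, x in enumerate(X):
            for table, key in zip(self.tables, self._keys(x)):
                table.setdefault(key, []).append(i)

    def query(self, q):
        cand = {i for table, key in zip(self.tables, self._keys(q))
                for i in table.get(key, [])}
        if not cand:
            return None
        cand = np.fromiter(cand, dtype=int)
        d = np.linalg.norm(self.X[cand] - q, axis=1)   # exact re-rank
        return int(cand[np.argmin(d)])

X = np.random.default_rng(2).standard_normal((1000, 64))
lsh = SignLSH(dim=64)
lsh.index(X)
print(lsh.query(X[42] + 0.01))   # expect 42 (approximate, may rarely miss)
```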

  7. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  8. Instantaneous stochastic perturbation theory

    International Nuclear Information System (INIS)

    Lüscher, Martin

    2015-01-01

    A form of stochastic perturbation theory is described, where the representative stochastic fields are generated instantaneously rather than through a Markov process. The correctness of the procedure is established to all orders of the expansion and for a wide class of field theories that includes all common formulations of lattice QCD.

  9. Stochastic climate theory

    NARCIS (Netherlands)

    Gottwald, G.A.; Crommelin, D.T.; Franzke, C.L.E.; Franzke, C.L.E.; O'Kane, T.J.

    2017-01-01

    In this chapter we review stochastic modelling methods in climate science. First we provide a conceptual framework for stochastic modelling of deterministic dynamical systems based on the Mori-Zwanzig formalism. The Mori-Zwanzig equations contain a Markov term, a memory term and a term suggestive of

  10. On Stochastic Dependence

    Science.gov (United States)

    Meyer, Joerg M.

    2018-01-01

    The contrary of stochastic independence splits into two cases: pairs of events being favourable or unfavourable. Examples show that both notions have quite unexpected properties, some of them contrary to intuition; for example, transitivity does not hold. Stochastic dependence is also useful for explaining instances of Simpson's paradox.
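
    A tiny worked example with a fair die makes the two cases concrete (example ours):

```python
from fractions import Fraction

# Fair die. A = {1,2,3}; B = {2,3} is favourable to A, B2 = {4,5} is not.
P = lambda ev: Fraction(len(ev), 6)
A, B, B2 = {1, 2, 3}, {2, 3}, {4, 5}
print(P(A & B), ">", P(A) * P(B))    # 1/3 > 1/6  -> favourable
print(P(A & B2), "<", P(A) * P(B2))  # 0   < 1/6  -> unfavourable
```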

  11. Stochastic quantization and gravity

    International Nuclear Information System (INIS)

    Rumpf, H.

    1984-01-01

    We give a preliminary account of the application of stochastic quantization to the gravitational field. We start in Section I from Nelson's formulation of quantum mechanics as Newtonian stochastic mechanics and only then introduce the Parisi-Wu stochastic quantization scheme on which all the later discussion will be based. In Section II we present a generalization of the scheme that is applicable to fields in physical (i.e. Lorentzian) space-time and treat the free linearized gravitational field in this manner. The most remarkable result of this is the noncausal propagation of conformal gravitons. Moreover the concept of stochastic gauge-fixing is introduced and a complete discussion of all the covariant gauges is given. A special symmetry relating two classes of covariant gauges is exhibited. Finally Section III contains some preliminary remarks on full nonlinear gravity. In particular we argue that in contrast to gauge fields the stochastic gravitational field cannot be transformed to a Gaussian process. (Author)

  12. Stochastic neuron models

    CERN Document Server

    Greenwood, Priscilla E

    2016-01-01

    This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...

  13. Five-dimensional visualization of phase transition in BiNiO3 under high pressure

    International Nuclear Information System (INIS)

    Liu, Yijin; Wang, Junyue; Yang, Wenge; Azuma, Masaki; Mao, Wendy L.

    2014-01-01

    Colossal negative thermal expansion was recently discovered in BiNiO3, associated with a low-density to high-density phase transition under high pressure. The varying proportion of co-existing phases plays a key role in the macroscopic behavior of this material. Here, we utilize a recently developed X-ray Absorption Near Edge Spectroscopy Tomography method and resolve the mixture of high/low pressure phases as a function of pressure at tens-of-nanometers resolution, taking advantage of the charge transfer during the transition. This five-dimensional (X, Y, Z, energy, and pressure) visualization of the phase boundary provides a high resolution method to study the interface dynamics of high/low pressure phase

  14. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation … that allow effective inference in problems with a high degree of complexity (e.g. several thousands of genes) and a small number of observations (e.g. 10-100), as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we … construct a compact representation of the co-expression network that allows the identification of regions with a high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than…

  15. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  16. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► The 3D graphene/polyaniline composite had not been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized by in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained by means of CV with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of the 3D graphene contribute to the high specific capacitance and good cycle life

  17. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation

    Directory of Open Access Journals (Sweden)

    Zhou Yang

    2017-01-01

    Full Text Available High pressures, high speeds, low noise and miniaturization are the directions of development for hydraulic pumps. Following this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulation of the HRP with and without cavitation is carried out by means of computational fluid dynamics (CFD) in this paper, which contributes to understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.

  18. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignments represent in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  19. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly in limited computational studies.
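
    As a hedged illustration of the dual computation that makes the approach above efficient (an n × n solve instead of anything m × m), here is a generic kernel ridge regression sketch in Python; it ignores censoring and all AFT-specific details, and every name and constant in it is our own choice.

        import numpy as np

        def rbf_kernel(X, Z, gamma=1e-4):
            # K[i, j] = exp(-gamma * ||x_i - z_j||^2)
            d2 = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
            return np.exp(-gamma * d2)

        def kernel_ridge_fit(X, y, lam=1.0, gamma=1e-4):
            # Dual solution alpha = (K + lam*I)^{-1} y: an n x n solve,
            # cheap when n (samples) << m (genes).
            n = X.shape[0]
            K = rbf_kernel(X, X, gamma)
            return np.linalg.solve(K + lam * np.eye(n), y)

        def kernel_ridge_predict(X_train, alpha, X_new, gamma=1e-4):
            return rbf_kernel(X_new, X_train, gamma) @ alpha

        # Toy data: n = 50 samples, m = 10000 "genes"
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 10000))
        y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=50)  # stand-in for log survival times
        alpha = kernel_ridge_fit(X, y)
        print(kernel_ridge_predict(X, alpha, X[:5]))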

  20. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  1. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if the digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  2. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  3. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  4. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  5. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  6. Highly Efficient Broadband Yellow Phosphor Based on Zero-Dimensional Tin Mixed-Halide Perovskite.

    Science.gov (United States)

    Zhou, Chenkun; Tian, Yu; Yuan, Zhao; Lin, Haoran; Chen, Banghao; Clark, Ronald; Dilbeck, Tristan; Zhou, Yan; Hurley, Joseph; Neu, Jennifer; Besara, Tiglet; Siegrist, Theo; Djurovich, Peter; Ma, Biwu

    2017-12-27

    Organic-inorganic hybrid metal halide perovskites have emerged as a highly promising class of light emitters, which can be used as phosphors for optically pumped white light-emitting diodes (WLEDs). By controlling the structural dimensionality, metal halide perovskites can exhibit tunable narrow and broadband emissions from the free-exciton and self-trapped excited states, respectively. Here, we report a highly efficient broadband yellow light emitter based on the zero-dimensional tin mixed-halide perovskite (C4N2H14Br)4SnBrxI6-x (x = 3). This rare-earth-free, ionically bonded crystalline material possesses a perfect host-dopant structure, in which the light-emitting metal halide species (SnBrxI6-x4-, x = 3) are completely isolated from each other and embedded in the wide-band-gap organic matrix composed of C4N2H14Br-. The strongly Stokes-shifted broadband yellow emission that peaked at 582 nm from this phosphor, which is a result of excited-state structural reorganization, has an extremely large full width at half-maximum of 126 nm and a high photoluminescence quantum efficiency of ∼85% at room temperature. UV-pumped WLEDs fabricated using this yellow emitter together with a commercial europium-doped barium magnesium aluminate blue phosphor (BaMgAl10O17:Eu2+) can exhibit high color rendering indexes of up to 85.

  7. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aimed at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires image reconstruction markedly faster than in present CT systems. Although various reconstruction methods have been proposed so far, the only method practically employed at present is the filtered backprojection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was not good, even though it is promising for high-speed reconstruction because of its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that can obtain high-quality images by exploiting the relationship between image quality and interpolation method. In this approach, the number of radial sampling points in Fourier space is increased by a factor of 2^β, and linear or spline interpolation is used. Comparison of this method with the present FBP method leads to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method becomes about 1/10 of that of the FBP method, and the memory requirement is also reduced by about 20%. (Wakatsuki, Y.)
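
    A minimal sketch of the direct Fourier (projection-slice) reconstruction idea follows, assuming a parallel-beam sinogram whose angles cover 0 to π; the Fourier-space interpolation, which the record identifies as the quality-critical step, is done here with plain linear gridding, and all function names and parameters are illustrative only.

        import numpy as np
        from scipy.interpolate import griddata

        def direct_fourier_reconstruct(sinogram, thetas):
            """sinogram: (n_angles, n_detectors) parallel projections; thetas in radians."""
            n = sinogram.shape[1]
            # Projection-slice theorem: the 1-D FFT of each projection is a
            # radial slice of the object's 2-D Fourier transform.
            slices = np.fft.fftshift(
                np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
            freqs = np.fft.fftshift(np.fft.fftfreq(n))
            th, fr = np.meshgrid(thetas, freqs, indexing="ij")
            pts = np.column_stack([(fr * np.cos(th)).ravel(),
                                   (fr * np.sin(th)).ravel()])
            # Interpolate the polar samples onto a Cartesian frequency grid;
            # the accuracy of this step dominates the final image quality.
            u, v = np.meshgrid(freqs, freqs, indexing="ij")
            F = griddata(pts, slices.ravel(), (u, v), method="linear", fill_value=0)
            return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))).real

    A production implementation would add the oversampled radial grid and spline interpolation the record describes; for comparison, skimage.transform.iradon provides the FBP baseline.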

  8. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue and nausea, reported with first-generation systems, occur no more often than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probe of some devices are the negative aspects that need to be improved. The loss of depth perception in 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the lack of the side effects reported by surgeons with first-generation systems, 3D LVSs seem to be strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, the use of lighter and smaller cameras, and of monitors that do not require glasses, lies in the near future.

  9. Collective excitations and superconductivity in reduced dimensional systems - Possible mechanism for high Tc

    International Nuclear Information System (INIS)

    Santoyo, B.M.

    1989-01-01

    The author studies in full detail a possible mechanism of superconductivity in slender electronic systems of finite cross section. This mechanism is based on the pairing interaction mediated by the multiple modes of acoustic plasmons in these structures. First, he shows that multiple non-Landau-damped acoustic plasmon modes exist for electrons in a quasi-one dimensional wire at finite temperatures. These plasmons are of two basic types. The first one is made up by the collective longitudinal oscillations of the electrons essentially of a given transverse energy level oscillating against the electrons in the neighboring transverse energy level. The modes are called Slender Acoustic Plasmons or SAP's. The other mode is the quasi-one dimensional acoustic plasmon mode in which all the electrons oscillate together in phase among themselves but out of phase against the positive ion background. He shows numerically and argues physically that even for a temperature comparable to the mode separation Δω the SAP's and the quasi-one dimensional plasmon persist. Then, based on a clear physical picture, he develops in terms of the dielectric function a theory of superconductivity capable of treating the simultaneous participation of multiple bosonic modes that mediate the pairing interaction. The effect of mode damping is then incorporated in a simple manner that is free of the encumbrance of the strong-coupling, Green's function formalism usually required for the retardation effect. Explicit formulae including such damping are derived for the critical temperature T c and the energy gap Δ 0 . With those modes and armed with such a formalism, he proceeds to investigate a possible superconducting mechanism for high T c in quasi-one dimensional single-wire and multi-wire systems

  10. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, due to the size of VHR images. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. In addition to spectral information, textural information was also used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture the input
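
    As a hedged sketch of the texture-extraction step described above (second-order Haralick features from a gray-level co-occurrence matrix), assuming scikit-image's graycomatrix/graycoprops interface (spelled greycomatrix/greycoprops in older releases); window size, distances and angles are our own illustrative choices:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def haralick_features(patch):
            """Second-order texture features for one image window."""
            glcm = graycomatrix(patch,
                                distances=[1, 2],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

        rng = np.random.default_rng(0)
        window = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in VHR patch
        print(haralick_features(window).shape)  # 4 props x 2 distances x 4 angles = (32,)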

  11. Filtering and control of stochastic jump hybrid systems

    CERN Document Server

    Yao, Xiuming; Zheng, Wei Xing

    2016-01-01

    This book presents recent research work on stochastic jump hybrid systems. Specifically, the stochastic jump hybrid systems considered include Markovian jump Ito stochastic systems, Markovian jump linear-parameter-varying (LPV) systems, Markovian jump singular systems, Markovian jump two-dimensional (2-D) systems, and Markovian jump repeated scalar nonlinear systems. Sufficient conditions are first established for the stability and performance of these kinds of stochastic jump hybrid systems in terms of solutions of linear matrix inequalities (LMIs). Based on the derived analysis conditions, the filtering and control problems are addressed. The book presents up-to-date research developments and novel methodologies on stochastic jump hybrid systems. The contents can be divided into two parts: the first part focuses on the robust filter design problem, while the second part puts the emphasis on the robust control problem. These methodologies provide a framework for stability and performance analy...

  12. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors. For teachers

  13. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region—as obtained by a numerical hydrodynamics and particle transport model—we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation field of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separations. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with orbital orientation.

  14. High-speed three-dimensional plasma temperature determination of axially symmetric free-burning arcs

    International Nuclear Information System (INIS)

    Bachmann, B; Ekkert, K; Bachmann, J-P; Marques, J-L; Schein, J; Kozakov, R; Gött, G; Schöpp, H; Uhrlandt, D

    2013-01-01

    In this paper we introduce an experimental technique that allows for high-speed, three-dimensional determination of electron density and temperature in axially symmetric free-burning arcs. Optical filters with narrow spectral bands of 487.5–488.5 nm and 689–699 nm are utilized to gain two-dimensional spectral information of a free-burning argon tungsten inert gas arc. A setup of mirrors allows one to image identical arc sections of the two spectral bands onto a single camera chip. Two different Abel inversion algorithms have been developed to reconstruct the original radial distribution of emission coefficients detected with each spectral window and to confirm the results. With the assumption of local thermodynamic equilibrium we calculate emission coefficients as a function of temperature by application of the Saha equation, the ideal gas law, the quasineutral gas condition and the NIST compilation of spectral lines. Ratios of calculated emission coefficients are compared with measured ones, yielding local plasma temperatures. In the case of axial symmetry the three-dimensional plasma temperature distributions have been determined at dc currents of 100, 125, 150 and 200 A, yielding temperatures up to 20000 K in the hot cathode region. These measurements have been validated by four different techniques utilizing a high-resolution spectrometer at different positions in the plasma. Plasma temperatures show good agreement throughout the different methods. Additionally, spatially resolved transient plasma temperatures have been measured for a dc pulsed process, employing a high-speed frame rate of 33000 frames per second, showing the modulation of the arc isothermals with time and providing information about the sensitivity of the experimental approach. (paper)
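
    The Abel inversion step above recovers radial emission coefficients from line-of-sight integrated intensities. A minimal onion-peeling variant, a generic textbook scheme rather than either of the authors' two algorithms, can be sketched as follows; the chord-length matrix is upper triangular, so the system is solved from the outermost shell inward.

        import numpy as np

        def chord_matrix(n, dr=1.0):
            # L[i, j]: path length of the chord at lateral position y_i
            # through the annular shell between radii r_j and r_{j+1}
            r = np.arange(n + 1) * dr
            y = (np.arange(n) + 0.5) * dr
            L = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    a = max(r[j + 1]**2 - y[i]**2, 0.0)
                    b = max(r[j]**2 - y[i]**2, 0.0)
                    L[i, j] = 2.0 * (np.sqrt(a) - np.sqrt(b))
            return L

        def onion_peel_abel(I, dr=1.0):
            # Upper-triangular system: solved from the outermost shell inward
            return np.linalg.solve(chord_matrix(len(I), dr), I)

        # Round trip on a synthetic Gaussian emissivity profile
        n = 50
        e_true = np.exp(-(np.arange(n) / 15.0)**2)
        I_meas = chord_matrix(n) @ e_true          # line-of-sight integration
        print(np.allclose(onion_peel_abel(I_meas), e_true))  # True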

  15. Stochastic background of atmospheric cascades

    International Nuclear Information System (INIS)

    Wilk, G.; Wlodarczyk, Z.

    1993-01-01

    Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from the fluctuations in the cascades themselves and is insensitive to the details of the elementary interactions
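
    A pure birth process with immigration, as invoked above, is straightforward to simulate with the Gillespie algorithm; the sketch below is our own illustration with arbitrary rate constants, not the authors' cascade model.

        import numpy as np

        def pure_birth_with_immigration(lam=0.5, nu=0.2, n0=1, t_max=10.0, seed=0):
            """Gillespie simulation: birth rate lam*n, immigration rate nu."""
            rng = np.random.default_rng(seed)
            t, n = 0.0, n0
            times, counts = [t], [n]
            while t < t_max:
                rate = lam * n + nu             # total event rate
                t += rng.exponential(1.0 / rate)
                n += 1                          # a birth or an immigrant arrival
                times.append(t)
                counts.append(n)
            return np.array(times), np.array(counts)

        # Multiplicity fluctuations across many simulated "cascades"
        finals = [pure_birth_with_immigration(seed=s)[1][-1] for s in range(200)]
        print(np.mean(finals), np.var(finals))  # broad, super-Poissonian spread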

  16. THREE-DIMENSIONAL OBSERVATIONS ON THICK BIOLOGICAL SPECIMENS BY HIGH VOLTAGE ELECTRON MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Tetsuji Nagata

    2011-05-01

    Full Text Available Thick biological specimens prepared as whole-mount cultured cells or thick sections from embedded tissues were stained with histochemical reactions, such as thiamine pyrophosphatase, glucose-6-phosphatase, cytochrome oxidase, acid phosphatase and DAB reactions, and radioautography, to observe the 3-D ultrastructure of cell organelles, producing stereo-pairs by high voltage electron microscopy at accelerating voltages of 400-1000 kV. The organelles demonstrated were the Golgi apparatus, endoplasmic reticulum, mitochondria, lysosomes, peroxisomes, pinocytotic vesicles and incorporations of radioactive compounds. As a result, these cell organelles were observed 3-dimensionally and the relative relationships between them were demonstrated.

  17. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.

  18. The high exponent limit $p \to \infty$ for the one-dimensional nonlinear wave equation

    OpenAIRE

    Tao, Terence

    2009-01-01

    We investigate the behaviour of solutions $\phi = \phi^{(p)}$ to the one-dimensional nonlinear wave equation $-\phi_{tt} + \phi_{xx} = -|\phi|^{p-1} \phi$ with initial data $\phi(0,x) = \phi_0(x)$, $\phi_t(0,x) = \phi_1(x)$, in the high exponent limit $p \to \infty$ (holding $\phi_0, \phi_1$ fixed). We show that if the initial data $\phi_0, \phi_1$ are smooth with $\phi_0$ taking values in $(-1,1)$ and obey a mild non-degeneracy condition, then $\phi$ converges locally uniformly to a piecewis...

  19. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensities. The results are shown for the ground state and some excited states; moreover, we give all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n = 100.

  20. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N. N.; Mazur, E. A.

    2016-01-01

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with those of similar structures of sulfur hydrides.

  1. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon–hydrogen bonds

    KAUST Repository

    Wang, Liang

    2015-04-22

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold–gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon–hydrogen bonds with molecular oxygen.

  2. Electric Field Guided Assembly of One-Dimensional Nanostructures for High Performance Sensors

    Directory of Open Access Journals (Sweden)

    Wing Kam Liu

    2012-05-01

    Full Text Available Various nanowire or nanotube-based devices have been demonstrated to fulfill the anticipated future demands on sensors. To fabricate such devices, electric field-based methods have demonstrated a great potential to integrate one-dimensional nanostructures into various forms. This review paper discusses theoretical and experimental aspects of the working principles, the assembled structures, and the unique functions associated with electric field-based assembly. The challenges and opportunities of the assembly methods are addressed in conjunction with future directions toward high performance sensors.

  3. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  4. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.

  5. A novel algorithm of artificial immune system for high-dimensional function numerical optimization

    Institute of Scientific and Technical Information of China (English)

    DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen

    2005-01-01

    Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, it is proved that IMCPA is convergent. Compared with some other evolutionary programming algorithms (like the Breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, like high-dimensional function optimization, which maintains the diversity of the population, avoids prematurity to some extent, and has a higher convergence speed.

  6. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive, the behavior of the wave beams is appreciably affected by self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  7. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
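
    A hedged sketch of the per-time-step merge-tree computation follows: a generic union-find sweep over a graph-structured scalar field, using our own simplified event representation rather than the authors' data structures (their temporal tracking then matches subtrees between consecutive time steps).

        import numpy as np

        def merge_tree_events(values, edges):
            """Saddle (merge) events of the superlevel sets of a scalar field.

            values: scalar value per vertex; edges: list of (u, v) pairs.
            Vertices are swept from high to low value; union-find tracks the
            connected components, and an event is recorded whenever a vertex
            joins two or more previously separate components.
            """
            parent = {}

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x

            adj = {v: [] for v in range(len(values))}
            for u, w in edges:
                adj[u].append(w)
                adj[w].append(u)

            events = []
            for v in np.argsort(values)[::-1]:
                v = int(v)
                parent[v] = v                       # component born at v
                roots = {find(u) for u in adj[v] if u in parent and u != v}
                if len(roots) >= 2:                 # v is a merge saddle
                    events.append((float(values[v]), sorted(roots)))
                for r in roots:
                    parent[r] = v
            return events

        # 1-D field with maxima at vertices 1 and 3; their branches merge
        # at the saddle vertex 2 (value 1).
        vals = np.array([0.0, 3.0, 1.0, 4.0, 0.0])
        print(merge_tree_events(vals, [(i, i + 1) for i in range(4)]))  # [(1.0, [1, 3])]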

  8. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensities. The results are shown for the ground state and some excited states; moreover, we give all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n = 100.

  9. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon-hydrogen bonds

    Science.gov (United States)

    Wang, Liang; Zhu, Yihan; Wang, Jian-Qiang; Liu, Fudong; Huang, Jianfeng; Meng, Xiangju; Basset, Jean-Marie; Han, Yu; Xiao, Feng-Shou

    2015-04-01

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold-gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon-hydrogen bonds with molecular oxygen.

  10. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
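
    For readers who want to experiment with the estimator analyzed above, a lasso-penalized Cox fit can be obtained, for example, with the lifelines library; this sketch assumes its penalized CoxPHFitter interface and is our illustrative choice, not the authors' code. With p > n the fit may be slow and requires a nonzero penalizer.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n, p = 100, 200                       # high-dimensional: p > n
        X = rng.normal(size=(n, p))
        hazard = np.exp(X[:, 0] - X[:, 1])    # only two truly active covariates
        T = rng.exponential(1.0 / hazard)     # latent survival times
        C = rng.exponential(2.0, size=n)      # censoring times

        df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
        df["duration"] = np.minimum(T, C)
        df["event"] = (T <= C).astype(int)    # 1 = observed, 0 = censored

        # l1_ratio=1.0 turns the elastic-net penalty into a pure lasso penalty
        cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
        cph.fit(df, duration_col="duration", event_col="event")
        print(cph.params_.abs().sort_values(ascending=False).head())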

  11. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Degtyarenko, N. N.; Mazur, E. A., E-mail: eugen-mazur@mail.ru [National Research Nuclear University MEPhI (Russian Federation)

    2016-08-15

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PH{sub k} are analyzed. The properties of the initial substance, namely, diphosphine are calculated. In contrast to phosphorus hydrides with stoichiometry PH{sub 3}, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  12. Sparse learning of stochastic dynamical equations

    Science.gov (United States)

    Boninsegna, Lorenzo; Nüske, Feliks; Clementi, Cecilia

    2018-06-01

    With the rapid increase of available data for complex systems, there is great interest in the extraction of physically relevant information from massive datasets. Recently, a framework called Sparse Identification of Nonlinear Dynamics (SINDy) has been introduced to identify the governing equations of dynamical systems from simulation data. In this study, we extend SINDy to stochastic dynamical systems which are frequently used to model biophysical processes. We prove the asymptotic correctness of stochastic SINDy in the infinite data limit, both in the original and projected variables. We discuss algorithms to solve the sparse regression problem arising from the practical implementation of SINDy and show that cross validation is an essential tool to determine the right level of sparsity. We demonstrate the proposed methodology on two test systems, namely, the diffusion in a one-dimensional potential and the projected dynamics of a two-dimensional diffusion process.
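
    In the spirit of the stochastic extension of SINDy described above, the following minimal sketch (our own, for a one-dimensional overdamped system) estimates a drift function from Euler-Maruyama increments by sparse regression over a monomial library with hard thresholding (STLSQ); all constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n = 1e-3, 200_000
        theta, sigma = 2.0, 0.5

        # Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
        x = np.empty(n)
        x[0] = 1.0
        for k in range(n - 1):
            x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()

        # Finite-difference drift estimates: (X_{t+dt} - X_t) / dt
        y = np.diff(x) / dt
        lib = np.column_stack([np.ones(n - 1), x[:-1], x[:-1]**2, x[:-1]**3])
        names = ["1", "x", "x^2", "x^3"]

        # STLSQ: least squares, then repeatedly zero out small coefficients
        coef = np.linalg.lstsq(lib, y, rcond=None)[0]
        for _ in range(10):
            small = np.abs(coef) < 0.5          # sparsity threshold
            coef[small] = 0.0
            keep = ~small
            if keep.any():
                coef[keep] = np.linalg.lstsq(lib[:, keep], y, rcond=None)[0]

        print(dict(zip(names, np.round(coef, 2))))  # expect x: -2.0, others 0

    As the record emphasizes, the sparsity threshold plays the role of the regularization level and would be chosen by cross validation in practice.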

  13. Three-dimensional interconnected porous graphitic carbon derived from rice straw for high performance supercapacitors

    Science.gov (United States)

    Jin, Hong; Hu, Jingpeng; Wu, Shichao; Wang, Xiaolan; Zhang, Hui; Xu, Hui; Lian, Kun

    2018-04-01

    Three-dimensional interconnected porous graphitic carbon materials are synthesized via a combination of graphitization and activation, with rice straw as the carbon source. The physicochemical properties of the three-dimensional interconnected porous graphitic carbon materials are characterized by nitrogen adsorption/desorption, Fourier-transform infrared spectroscopy, X-ray diffraction, Raman spectroscopy, scanning electron microscopy and transmission electron microscopy. The results demonstrate that the as-prepared carbon is a high-surface-area material (a specific surface area of 3333 m² g⁻¹ with abundant mesoporous and microporous structures). It exhibits superb performance in symmetric double-layer capacitors, with a high specific capacitance of 400 F g⁻¹ at a current density of 0.1 A g⁻¹, good rate performance with 312 F g⁻¹ at a current density of 5 A g⁻¹, and favorable cycle stability with 6.4% loss after 10000 cycles at a current density of 5 A g⁻¹ in an aqueous electrolyte of 6 M KOH. Thus, rice straw is a promising carbon source for fabricating inexpensive, sustainable and high-performance supercapacitor electrode materials.

  14. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform a previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • The single-photon-added coherent state source is incorporated into high-dimensional quantum key distribution. • Both the secret key capacity and the secret key rate are enhanced compared with previous schemes. • The protocol shows excellent performance in view of statistical fluctuations.
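
    For orientation, a generic asymptotic bound often quoted for d-dimensional protocols is r = log2(d) - 2·h_d(Q), where h_d is the d-ary entropy of the symbol error rate Q. The sketch below evaluates that generic bound only; it is not the ESPACS-specific decoy-state estimate derived in the paper.

    ```python
    import numpy as np

    def h_d(q, d):
        """d-ary entropy of an error rate q spread uniformly over d-1 wrong outcomes."""
        if q == 0:
            return 0.0
        return -q * np.log2(q / (d - 1)) - (1 - q) * np.log2(1 - q)

    def key_rate(d, q):
        """Generic asymptotic secret-key bound per sifted symbol (not the paper's formula)."""
        return np.log2(d) - 2 * h_d(q, d)

    for d in (2, 4, 8):
        print(d, round(key_rate(d, 0.05), 3))  # the capacity grows with dimension d
    ```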

  15. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional peak area was calculated for 11 antioxidants by two different methods: the areas reported by the control software and the areas obtained by fitting the data with a Gaussian model. Both methods were evaluated for precision and sensitivity, and both demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining the areas reported by the control software gave superior limits of detection, on the order of 1 × 10-6 M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of a countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification.
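
    A hedged sketch of the Gaussian peak-area method on synthetic data, assuming a single well-resolved second-dimension peak; scipy's curve_fit plays the role of the fitting routine, and the peak parameters are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.integrate import trapezoid

    def gaussian(t, area, mu, sigma):
        """Gaussian peak parameterized directly by its area."""
        return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(t - mu)**2 / (2 * sigma**2))

    # Synthetic second-dimension chromatogram: one peak plus baseline noise.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 60, 600)                  # retention time, s
    signal = gaussian(t, 12.0, 25.0, 2.5) + 0.02 * rng.standard_normal(t.size)

    popt, _ = curve_fit(gaussian, t, signal, p0=[10.0, 24.0, 2.0])
    print(f"fitted Gaussian area = {popt[0]:.3f}")
    print(f"numerical integration = {trapezoid(signal, t):.3f}")  # software-style area
    ```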

  16. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2012-04-01

    This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as the computational requirements of its learning algorithm, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked iteratively, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second decides the antecedent part of the fuzzy rule using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results on a hyperspectral remote sensing classification task as well as on 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  17. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. The remaining major pitfall is their relatively poor photovoltaic performance in contrast to 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)2(MA)3Pb4I13 perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among the reported 2D devices, with excellent humidity resistance. The enhanced efficiency from 12.3% (without Cs) to 13.7% (with 5% Cs) is attributed to perfectly controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility and charge-transfer kinetics. Surprisingly, it is found that Cs doping yields superior stability for the 2D perovskite solar cells when subjected to a high-humidity environment without encapsulation. The device doped with 5% Cs degrades by only ca. 10% after 1400 hours of exposure at 30% relative humidity (RH), and exhibits significantly improved stability under heating and high-moisture environments. Our results provide an important step toward air-stable and fully printable low-dimensional perovskites as a next-generation renewable energy source.

  18. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform a previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • The single-photon-added coherent state source is incorporated into high-dimensional quantum key distribution. • Both the secret key capacity and the secret key rate are enhanced compared with previous schemes. • The protocol shows excellent performance in view of statistical fluctuations.

  19. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for the joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the two modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and a complex mean-variance relationship in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and the model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  20. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery: a systematic review.

    Science.gov (United States)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian; Kildebro, Niels; Rosenberg, Jacob

    2017-01-01

    This systematic review investigates newer-generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons showed contradictory results due to technological shortcomings. The review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) comparing newer-generation 3D laparoscopy with 2D laparoscopy were included through searches in PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials database. Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) found a significant reduction in performance time and 10 of 13 trials (77%) a significant reduction in error with the use of 3D laparoscopy. Overall, 3D laparoscopy was found to be superior or equal to 2D laparoscopy, and all trials featuring subjective evaluation found 3D laparoscopy superior. More clinical RCTs are still awaited to confirm that these results can be reproduced.

  1. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. The book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.
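
    As a toy illustration of the optimal stopping theory the book develops (not an example from the book itself), the following backward induction computes the value of optimally stopping a sequence of i.i.d. Uniform(0, 1) draws to maximize the expected accepted value.

    ```python
    import numpy as np

    def uniform_stopping_values(n):
        """Backward induction for stopping i.i.d. Uniform(0,1) draws to maximize E[X_tau].

        v[k] is the value with k draws remaining; the optimal rule accepts a draw X
        exactly when X exceeds the continuation value v[k-1].
        """
        v = np.empty(n + 1)
        v[0] = 0.0                      # no draws left: nothing to gain
        for k in range(1, n + 1):
            c = v[k - 1]                # continuation value
            v[k] = (1 + c * c) / 2      # E[max(X, c)] for X ~ Uniform(0, 1)
        return v

    v = uniform_stopping_values(10)
    print(v[10])  # value with 10 draws remaining, ~0.86
    ```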

  2. Remarks on stochastic acceleration

    International Nuclear Information System (INIS)

    Graeff, P.

    1982-12-01

    Stochastic acceleration and turbulent diffusion are strong-turbulence problems, since no expansion parameter exists. Hence the problem of finding rigorous results is of major interest, both for checking approximations and for reference models. Since we have found a way of constructing such models in the turbulent diffusion case, the question of the extension to stochastic acceleration now arises. The paper offers some possibilities, illustrated by the case of 'stochastic free fall', which may be particularly interesting in the context of linear response theory. (orig.)

  3. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution {gamma}-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold {gamma}-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performance of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases was tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8{pi} spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  4. Three-dimensional bicontinuous nanoporous Au/polyaniline hybrid films for high-performance electrochemical supercapacitors

    Science.gov (United States)

    Lang, Xingyou; Zhang, Ling; Fujita, Takeshi; Ding, Yi; Chen, Mingwei

    2012-01-01

    We report three-dimensional bicontinuous nanoporous Au/polyaniline (PANI) composite films made by one-step electrochemical polymerization of a PANI shell onto dealloyed nanoporous gold (NPG) skeletons for applications in electrochemical supercapacitors. The NPG/PANI-based supercapacitors exhibit ultrahigh volumetric capacitance (∼1500 F cm-3) and energy density (∼0.078 Wh cm-3), which are seven and four orders of magnitude higher than those of electrolytic capacitors, with the same power density of up to ∼190 W cm-3. The outstanding capacitive performance results from a novel nanoarchitecture in which pseudocapacitive PANI shells are incorporated into the pore channels of highly conductive NPG, making these films promising candidates as electrode materials in supercapacitor devices combining high energy storage densities with high power delivery.

  5. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performance of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases was tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  6. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, a full 3D device simulation tool, the quasi-3D simulation method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, making no demands on high-end computer hardware, and being easy to operate. (semiconductor integrated circuits)

  7. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, a full 3D device simulation tool, the quasi-3D simulation method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, making no demands on high-end computer hardware, and being easy to operate. (semiconductor integrated circuits)

  8. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme for high-efficiency one-dimensional (1D) atom localization via measurement of the upper-state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the localization peak position and number, as well as the conditional position probability, can be easily controlled by the system parameters, and sub-half-wavelength atom localization is also observed. More importantly, there is a 100% probability of detecting the atom in the subwavelength domain when the corresponding conditions are satisfied. The proposed scheme may open up a promising way to achieve high-precision and high-efficiency 1D atom localization. (paper)

  9. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

    The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that realises both high resolution and high throughput using a two-dimensional area sensor. For the spectral resolution, we obtained an interferogram with a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to realise a higher throughput by accumulating the signal across vertical pixels. Our approach significantly improved the resolution and signal-to-noise ratio, by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.

  10. Stochastic stability of four-wheel-steering system

    International Nuclear Information System (INIS)

    Huang Dongwei; Wang Hongli; Zhu Zhiwen; Feng Zhang

    2007-01-01

    A four-wheel-steering system subjected to white noise excitations was reduced to a two-degree-of-freedom quasi-non-integrable Hamiltonian system. We then obtained a one-dimensional Itô stochastic differential equation for the averaged Hamiltonian of the system by using the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The stochastic stability of the four-wheel-steering system was thus analyzed by examining the sample behavior of the averaged Hamiltonian at the boundary H = 0 and calculating its Lyapunov exponent. An example given at the end demonstrates that the conclusion obtained is of considerable significance.
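
    A sketch of the final step under invented dynamics: suppose that near the boundary H = 0 the averaged Itô equation linearizes to dH = aH dt + sH dW (a and s are assumptions, not the paper's coefficients). In log coordinates the increments are i.i.d., so the Lyapunov exponent can be read off a single long sample path.

    ```python
    import numpy as np

    # Hypothetical averaged Ito equation near the boundary H = 0:
    #   dH = a*H dt + s*H dW   (linearized; a and s are invented coefficients).
    # The trivial solution is almost surely stable iff lambda = a - s^2/2 < 0.
    rng = np.random.default_rng(2)
    a, s, dt, n = -0.1, 0.5, 1e-3, 1_000_000

    # Exact increments of ln H for this linearized diffusion.
    increments = (a - 0.5 * s * s) * dt + s * np.sqrt(dt) * rng.standard_normal(n)
    lyapunov = increments.sum() / (n * dt)
    print("estimated Lyapunov exponent:", lyapunov)   # exact value: a - s**2/2 = -0.225
    ```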

  11. Monte Carlo simulation of fully Markovian stochastic geometries

    International Nuclear Information System (INIS)

    Lepage, Thibaut; Delaby, Lucie; Malvagi, Fausto; Mazzolo, Alain

    2010-01-01

    Interest in solving the transport equation in stochastic media has continued to increase in recent years. For binary stochastic media it is often assumed that the geometry is Markovian, which is never the case in usual environments. In the present paper, based on rigorous mathematical theorems, we construct fully two-dimensional Markovian stochastic geometries and study their main properties. In particular, we determine a percolation threshold p_c equal to 0.586 ± 0.0015 for such geometries. Finally, Monte Carlo simulations are performed through these geometries and the results compared to homogeneous geometries. (author)

  12. Stochastic gene expression in Arabidopsis thaliana.

    Science.gov (United States)

    Araújo, Ilka Schultheiß; Pietsch, Jessica Magdalena; Keizer, Emma Mathilde; Greese, Bettina; Balkunde, Rachappa; Fleck, Christian; Hülskamp, Martin

    2017-12-14

    Although plant development is highly reproducible, some stochasticity exists. This developmental stochasticity may be caused by noisy gene expression. Here we analyze the fluctuation of protein expression in Arabidopsis thaliana. Using the photoconvertible KikGR marker, we show that protein expression in individual cells fluctuates over time. A dual reporter system was used to study the extrinsic and intrinsic noise of marker gene expression. We report that extrinsic noise is higher than intrinsic noise and that extrinsic noise in stomata is clearly lower in comparison to several other tissues/cell types. Finally, we show that cells are coupled with respect to stochastic protein expression in young leaves, hypocotyls and roots but not in mature leaves. Our data indicate that the stochasticity of gene expression can vary between tissues/cell types and that it can be coupled in a non-cell-autonomous manner.
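
    Dual-reporter data of this kind are commonly decomposed with the Elowitz-style estimators sketched below; the synthetic cell population is an assumption used purely to exercise the formulas, not the study's data.

    ```python
    import numpy as np

    def noise_decomposition(c1, c2):
        """Elowitz-style dual-reporter decomposition of expression noise.

        c1, c2: matched expression levels of two identical reporters per cell.
        Returns (intrinsic^2, extrinsic^2, total^2) normalized noise terms.
        """
        m1, m2 = c1.mean(), c2.mean()
        eta_int2 = ((c1 - c2) ** 2).mean() / (2 * m1 * m2)
        eta_ext2 = ((c1 * c2).mean() - m1 * m2) / (m1 * m2)
        return eta_int2, eta_ext2, eta_int2 + eta_ext2

    # Synthetic cells: a shared (extrinsic) factor scales both reporters,
    # while independent (intrinsic) fluctuations perturb each separately.
    rng = np.random.default_rng(3)
    n = 5000
    extrinsic = rng.lognormal(0.0, 0.3, n)
    c1 = extrinsic * rng.lognormal(0.0, 0.1, n)
    c2 = extrinsic * rng.lognormal(0.0, 0.1, n)
    print(noise_decomposition(c1, c2))  # the extrinsic term should dominate
    ```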

  13. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation: It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...
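
    To make the smooth-approximation idea concrete (a generic sketch, not the AucPR method itself, whose description is truncated above), one can maximize a sigmoid-smoothed empirical AUC of a linear score with a ridge-type penalty by gradient ascent; all data and parameters below are invented.

    ```python
    import numpy as np

    def smoothed_auc(beta, Xp, Xn, h=0.1):
        """Sigmoid-smoothed empirical AUC of the linear score X @ beta."""
        z = (Xp @ beta)[:, None] - (Xn @ beta)[None, :]
        return float((1.0 / (1.0 + np.exp(-z / h))).mean())

    def fit_smoothed_auc(Xp, Xn, lam=0.1, h=0.1, lr=0.5, n_iter=300, seed=0):
        """Gradient ascent on the smoothed AUC with an L2 penalty (illustrative)."""
        rng = np.random.default_rng(seed)
        beta = 0.01 * rng.standard_normal(Xp.shape[1])
        for _ in range(n_iter):
            z = (Xp @ beta)[:, None] - (Xn @ beta)[None, :]
            s = 1.0 / (1.0 + np.exp(-z / h))
            w = s * (1.0 - s) / h                   # derivative of the sigmoid
            grad = (w[:, :, None] * (Xp[:, None, :] - Xn[None, :, :])).mean(axis=(0, 1))
            beta += lr * (grad - lam * beta)
        return beta / np.linalg.norm(beta)

    # Cases are shifted in the first two of twenty features; the rest are noise.
    rng = np.random.default_rng(6)
    Xn = rng.standard_normal((150, 20))
    Xp = rng.standard_normal((150, 20))
    Xp[:, :2] += 1.0
    beta = fit_smoothed_auc(Xp, Xn)
    print(np.round(beta, 2))                        # weight concentrates on features 0, 1
    print("smoothed AUC:", round(smoothed_auc(beta, Xp, Xn), 3))
    ```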

  14. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.

    Science.gov (United States)

    Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young

    2017-03-14

    Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of the discretization size when the discretization size is small enough. The three proposed methods can correctly (to a certain precision) simulate Hill function dynamics in the microscopic RDME system.
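
    A numerical toy reproducing the flavor of this observation, under the assumption of a well-mixed voxel of size h holding a Poisson-distributed copy number: as h shrinks, the voxel-averaged Hill propensity stops being switch-like and approaches a linear function of h. The constants below are illustrative.

    ```python
    import numpy as np

    def hill(c, K=1.0, m=4):
        """Switch-like Hill function of a local concentration c."""
        return c**m / (K**m + c**m)

    # A voxel of size h holds a Poisson(c_bar * h) copy number N, so the local
    # concentration is N / h. Once h is small enough that each voxel holds 0 or 1
    # molecules, E[hill(N/h)] ~ P(N >= 1) ~ c_bar * h: linear in h, not switch-like.
    rng = np.random.default_rng(4)
    c_bar = 0.5
    for h in (10.0, 1.0, 0.1, 0.01):
        n = rng.poisson(c_bar * h, size=200_000)
        print(f"h = {h:6.2f}   E[hill(N/h)] = {hill(n / h).mean():.4f}")
    ```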

  15. American option pricing with stochastic volatility processes

    Directory of Open Access Journals (Sweden)

    Ping LI

    2017-12-01

    To address the option pricing problem more completely, the option pricing problem under the Heston stochastic volatility model is considered. The optimal exercise boundary of an American option and the conditions for its early exercise are analyzed and discussed. Since there is no analytical American option pricing formula, the partial differential equation satisfied by American options with Heston stochastic volatility is discretized in space into a corresponding system of differential equations, and numerical solutions for the option price are then obtained using a high-order compact finite difference method. Numerical experiments are carried out to verify the theoretical results and the simulation. The optimal exercise boundaries under constant volatility and under stochastic volatility are compared, and the results show that the optimal exercise boundary is also affected by the stochastic volatility. Under the chosen parameter settings, the behavior and nature of the volatility are analyzed, the volatility curve is simulated, the results of the high-order compact difference method are compared, and the numerical option solution is obtained, thereby validating the method. The research results provide a reference for solving option pricing problems under stochastic volatility, such as multi-asset option pricing and barrier option pricing.
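
    As a constant-volatility baseline for the comparison discussed above (not the paper's high-order compact Heston scheme), the following sketch prices an American put by an explicit finite-difference step plus an early-exercise projection at every time level; all parameters are illustrative.

    ```python
    import numpy as np

    def american_put_fd(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                        s_max=300, ns=300, nt=20000):
        """American put by an explicit finite-difference scheme (constant volatility)
        with the early-exercise constraint enforced by projection each step."""
        ds, dt = s_max / ns, T / nt           # dt must satisfy the explicit stability limit
        S = np.linspace(0, s_max, ns + 1)
        payoff = np.maximum(K - S, 0.0)
        V = payoff.copy()
        for _ in range(nt):
            delta = (V[2:] - V[:-2]) / (2 * ds)
            gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / ds**2
            theta = 0.5 * sigma**2 * S[1:-1]**2 * gamma + r * S[1:-1] * delta - r * V[1:-1]
            V[1:-1] += dt * theta
            V[0], V[-1] = K, 0.0              # boundary conditions
            V = np.maximum(V, payoff)         # early-exercise constraint
        return np.interp(S0, S, V)

    print(american_put_fd())  # ~6.09 for these parameters
    ```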

  16. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser consisting of a Mach-Zehnder interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link transmitting a 256-gray-scale (16-gray-scale) picture. The results show that zero bit error rate performance has been achieved.
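
    The decoding idea, an FFT over the azimuthal angle picking out which OAM charges are present, can be illustrated numerically; the ring sampling and mode set below are assumptions, and the interferometric optics are not modeled.

    ```python
    import numpy as np

    # Toy field on a ring: a superposition of OAM modes exp(i*l*phi), with l = 0
    # standing in for the Gaussian reference mode. An FFT over the azimuthal angle
    # reveals which charges are present: the decoding idea, not the real optics.
    n_phi = 256
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    sent = [0, 2, 5, 7]                       # OAM charges present in the superposition
    field = sum(np.exp(1j * l * phi) for l in sent)

    spectrum = np.abs(np.fft.fft(field)) / n_phi
    decoded = [l for l in range(16) if spectrum[l] > 0.5]
    print(decoded)                            # -> [0, 2, 5, 7]
    ```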

  17. On the sensitivity of dimensional stability of high density polyethylene on heating rate

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Although high density polyethylene (HDPE) is one of the most widely used industrial polymers, its applications have been limited relative to its potential because of its low dimensional stability, particularly at high temperature. The dilatometry test is a method for examining the thermal dimensional stability (TDS) of the material. Despite its importance, simulation of the TDS of HDPE during the dilatometry test has received little attention from other investigators; the main goal of this research is therefore the simulation of the TDS of HDPE, together with validation of the simulation results against practical experiments. For this purpose the standard dilatometry test was performed on HDPE specimens, and the secant coefficient of linear thermal expansion was computed from the test. Then, by considering the boundary conditions and material properties, the dilatometry test was simulated at different heating rates and the thermal strain versus temperature was calculated. The results showed that the simulation results and the practical experiments were in close agreement.
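
    For concreteness, a secant coefficient of linear thermal expansion is computed from two dilatometer readings as below; the numbers are invented but of a magnitude typical for HDPE, and real values would come from the dilatometer trace.

    ```python
    # Secant coefficient of linear thermal expansion from dilatometry data:
    #   alpha_sec(T) = (L(T) - L0) / (L0 * (T - T0))
    T0, L0 = 25.0, 50.000          # reference temperature (C) and specimen length (mm)
    T, L = 95.0, 50.630            # a later reading (illustrative numbers)

    alpha_sec = (L - L0) / (L0 * (T - T0))
    print(f"alpha_sec = {alpha_sec:.2e} 1/C")   # ~1.8e-4 1/C, a typical HDPE magnitude
    ```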

  18. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and a buffer threshold analysis, it maximizes the energy efficiency of wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference between previous approaches and the proposed one is that existing state-of-the-art schemes use duty cycles and sleep modes to reduce the energy consumption of individual sensor devices, whereas the proposed scheme employs group management of sensor devices to maximize the overall energy efficiency of the whole WSN by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.

  19. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
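
    A compressed sketch of this scoring machinery, assuming the Brier score as the strictly proper rule and out-of-bootstrap samples as the validation sets; the two players' strategies and the synthetic data below are stand-ins, not the ones from the study.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegressionCV

    def bootstrap_cv_brier(model, X, y, n_boot=30, seed=0):
        """Average Brier score on out-of-bootstrap samples (a strictly proper rule)."""
        rng = np.random.default_rng(seed)
        n, scores = len(y), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                    # bootstrap resample
            oob = np.setdiff1d(np.arange(n), idx)          # held-out cases
            if oob.size == 0:
                continue
            model.fit(X[idx], y[idx])
            p = model.predict_proba(X[oob])[:, 1]
            scores.append(np.mean((p - y[oob]) ** 2))
        return float(np.mean(scores))

    X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                               random_state=0)
    players = [("lasso-logistic", LogisticRegressionCV(penalty="l1",
                                                       solver="liblinear", Cs=5)),
               ("random forest", RandomForestClassifier(n_estimators=200,
                                                        random_state=0))]
    for name, model in players:
        print(name, bootstrap_cv_brier(model, X, y))       # lower score wins
    ```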

  20. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches over all combinations of features are then a prerequisite for finding the optimal feature subsets for classifying such data sets. We show that our approach outperforms existing filter feature subset selection methods on most of the 24 selected benchmark data sets.
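
    A small sketch of the stopping criterion and of why exhaustive search can be necessary: on XOR-like data, every single feature carries no mutual information with the class, yet a pair attains I(X_S; Y) = H(Y). The function names and data are illustrative, not the paper's implementation.

    ```python
    import numpy as np
    from itertools import combinations

    def entropy_of(labels):
        """Empirical Shannon entropy (bits) of a sequence of discrete labels."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def joint_mi(Xs, y):
        """I(X_S; Y) = H(Y) - H(Y | X_S) from empirical frequencies (discrete data)."""
        groups = {}
        for row, label in zip(map(tuple, Xs), y):
            groups.setdefault(row, []).append(label)
        n = len(y)
        h_cond = sum(len(g) / n * entropy_of(g) for g in groups.values())
        return entropy_of(y) - h_cond

    def markov_blanket_search(X, y, tol=0.01):
        """Smallest subset S with I(X_S; Y) ~ H(Y); exhaustive over subset sizes,
        as the paper argues is needed when pairwise MI cannot capture the joint MI."""
        h_y = entropy_of(y)
        for k in range(1, X.shape[1] + 1):
            for S in combinations(range(X.shape[1]), k):
                if joint_mi(X[:, list(S)], y) >= h_y - tol:
                    return S
        return None

    # XOR-style data: y is determined by features 0 and 1 jointly; each alone is useless.
    rng = np.random.default_rng(5)
    X = rng.integers(0, 2, size=(1000, 4))
    y = X[:, 0] ^ X[:, 1]
    print(markov_blanket_search(X, y))  # -> (0, 1)
    ```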