WorldWideScience

Sample records for high dimensional experiment

  1. Three-dimensional triplet tracking for LHC and future high rate experiments

    International Nuclear Information System (INIS)

    Schöning, A

    2014-01-01

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. Full tracks can be reconstructed step-wise by connecting hit triplet combinations from different layers, thus heavily reducing the combinatorial problem and accelerating track linking. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points. With the advent of relatively cheap and industrially available CMOS-sensors the construction of highly granular full scale pixel tracking detectors seems to be possible also for experiments at LHC or future high energy (hadron) colliders. In this paper tracking performance studies for full-scale pixel detectors, including their optimisation for 3D-triplet tracking, are presented. The results obtained for different types of tracker geometries and different reconstruction methods are compared. The potential of reducing the number of tracking layers and - along with that - the material budget using this new tracking concept is discussed. The possibility of using 3D-triplet tracking for triggering and fast online reconstruction is highlighted
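
    The triplet fit described above is simple enough to sketch: in a solenoidal field the transverse projection of a track is a circular arc, so three pixel space points fix the circle and hence the curvature and transverse momentum. Below is a minimal illustration in Python, assuming a uniform axial field and the usual pT[GeV/c] ≈ 0.3·B[T]·R[m] relation; the function names and toy numbers are illustrative, not the paper's.

    ```python
    import numpy as np

    def circle_through(p1, p2, p3):
        """Circumcenter and radius of the circle through three transverse (x, y) points."""
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
        ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
              + (x3**2 + y3**2) * (y1 - y2)) / d
        uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
              + (x3**2 + y3**2) * (x2 - x1)) / d
        return (ux, uy), np.hypot(x1 - ux, y1 - uy)

    def triplet_track_parameters(hits, b_field=2.0):
        """Estimate curvature radius and pT from one 3D hit triplet.

        hits: three (x, y, z) space points in metres, b_field in tesla.
        Assumes a helical track in a solenoidal field with hit uncertainties
        dominated by multiple scattering, as discussed in the abstract.
        """
        xy = [(x, y) for x, y, _ in hits]
        center, radius = circle_through(*xy)
        pt_gev = 0.3 * b_field * radius   # pT [GeV/c] ~ 0.3 * B[T] * R[m]
        return center, radius, pt_gev

    # Toy triplet: three hits roughly on a metre-scale arc
    hits = [(0.10, 0.005, 0.02), (0.20, 0.020, 0.04), (0.30, 0.045, 0.06)]
    print(triplet_track_parameters(hits))
    ```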

  2. Three-Dimensional Triplet Tracking for LHC and Future High Rate Experiments

    CERN Document Server

    Schöning, Andre

    2014-10-20

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space poi...

  3. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  4. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
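
    A small numerical check of the gate algebra described above: the d-dimensional generalizations of X (cyclic shift) and Z (phase) satisfy X^d = I and ZX = ωXZ. The sketch below, in Python, assumes the standard Weyl-Heisenberg form of these gates for d = 4; it illustrates the algebra only, not the optical implementation.

    ```python
    import numpy as np

    d = 4  # dimension of the OAM qudit

    # Generalized Pauli X (cyclic shift) and Z (phase) gates
    X = np.zeros((d, d), dtype=complex)
    for k in range(d):
        X[(k + 1) % d, k] = 1.0            # X|k> = |k+1 mod d>

    omega = np.exp(2j * np.pi / d)
    Z = np.diag([omega**k for k in range(d)])   # Z|k> = omega^k |k>

    # Integer powers of X, as realized in the four-dimensional experiment
    powers = [np.linalg.matrix_power(X, n) for n in range(d)]

    # X^d = identity, and ZX = omega XZ (Weyl commutation relation)
    assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
    assert np.allclose(Z @ X, omega * X @ Z)
    print("X^d = I and ZX = omega XZ hold for d =", d)
    ```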

  5. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known...... to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches...... for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
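
    The loss of contrast between similar and dissimilar objects mentioned above can be illustrated numerically: for uniformly random points, the gap between the nearest and farthest neighbour shrinks relative to the nearest-neighbour distance as the dimension grows. A small Python demonstration with illustrative parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def distance_contrast(dim, n_points=500):
        """Relative contrast (d_max - d_min) / d_min between one query point
        and a uniform random sample, for a given dimensionality."""
        data = rng.random((n_points, dim))
        query = rng.random(dim)
        dists = np.linalg.norm(data - query, axis=1)
        return (dists.max() - dists.min()) / dists.min()

    for dim in (2, 10, 100, 1000):
        print(f"dim={dim:5d}  relative contrast={distance_contrast(dim):.3f}")
    # The contrast shrinks as dim grows: similar and dissimilar points look alike.
    ```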

  6. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    Science.gov (United States)

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

    Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.

  7. Three-dimensional simulations of low foot and high foot implosion experiments on the National Ignition Facility

    International Nuclear Information System (INIS)

    Clark, D. S.; Weber, C. R.; Milovich, J. L.; Salmonson, J. D.; Kritcher, A. L.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Hurricane, O. A.; Jones, O. S.; Marinak, M. M.; Patel, P. K.; Robey, H. F.; Sepke, S. M.; Edwards, M. J.

    2016-01-01

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) [E. I. Moses et al., Phys. Plasmas 16, 041006 (2009)] require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3D) character of the flow, accurately modeling NIF implosions remains at the edge of current simulation capabilities. This paper describes the current state of progress of 3D capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. For both implosion types, the simulations show reasonable, though not perfect, agreement with the data and suggest that a reliable predictive capability is developing to guide future implosions toward ignition.

  8. Three-dimensional simulations of low foot and high foot implosion experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Clark, D. S.; Weber, C. R.; Milovich, J. L.; Salmonson, J. D.; Kritcher, A. L.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Hurricane, O. A.; Jones, O. S.; Marinak, M. M.; Patel, P. K.; Robey, H. F.; Sepke, S. M.; Edwards, M. J. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94550 (United States)

    2016-05-15

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) [E. I. Moses et al., Phys. Plasmas 16, 041006 (2009)] require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3D) character of the flow, accurately modeling NIF implosions remains at the edge of current simulation capabilities. This paper describes the current state of progress of 3D capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. For both implosion types, the simulations show reasonable, though not perfect, agreement with the data and suggest that a reliable predictive capability is developing to guide future implosions toward ignition.

  9. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    Full Text Available. High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of the Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...

  10. High-resolution nuclear magnetic resonance measurements in inhomogeneous magnetic fields: A fast two-dimensional J-resolved experiment

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yuqing; Cai, Shuhui; Yang, Yu; Sun, Huijun; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn; Chen, Zhong, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn [Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, State Key Laboratory for Physical Chemistry of Solid Surfaces, Xiamen University, Xiamen 361005 (China); Lin, Yung-Ya [Department of Chemistry and Biochemistry, University of California, Los Angeles, California 90095 (United States)

    2016-03-14

    High spectral resolution in nuclear magnetic resonance (NMR) is a prerequisite for achieving accurate information relevant to molecular structures and composition assignments. The continuous development of superconducting magnets guarantees strong and homogeneous static magnetic fields for satisfactory spectral resolution. However, there exist circumstances, such as measurements on biological tissues and heterogeneous chemical samples, where the field homogeneity is degraded and spectral line broadening seems inevitable. Here we propose an NMR method, named intermolecular zero-quantum coherence J-resolved spectroscopy (iZQC-JRES), to face the challenge of field inhomogeneity and obtain desired high-resolution two-dimensional J-resolved spectra with fast acquisition. Theoretical analyses for this method are given according to the intermolecular multiple-quantum coherence treatment. Experiments on (a) a simple chemical solution and (b) an aqueous solution of mixed metabolites under externally deshimmed fields, and on (c) a table grape sample with intrinsic field inhomogeneity from magnetic susceptibility variations demonstrate the feasibility and applicability of the iZQC-JRES method. The application of this method to inhomogeneous chemical and biological samples, maybe in vivo samples, appears promising.

  11. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.

  12. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (\\texttt{hdm}) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  13. Continuation of full-scale three-dimensional numerical experiments on high-intensity particle and laser beam-matter interactions

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Warren, B.

    2012-12-01

    We present results from the grant entitled, Continuation of full-scale three-dimensional numerical experiments on high-intensity particle and laser beam-matter interactions. The research significantly advanced the understanding of basic high-energy density science (HEDS) on ultra-intense laser and particle beam plasma interactions. This advancement in understanding was then used to aid in the quest to make 1 GeV to 500 GeV plasma-based accelerator stages. The work blended basic research with three-dimensional, fully nonlinear and fully kinetic simulations, including full-scale modeling of ongoing or planned experiments. The primary tool was three-dimensional particle-in-cell simulations. The simulations provided a test bed for theoretical ideas and models as well as a method to guide experiments. The research also included careful benchmarking of codes against experiment. High-fidelity full-scale modeling provided a means to extrapolate parameters into regimes that were not accessible to current or near-term experiments, thereby allowing concepts to be tested with confidence before tens to hundreds of millions of dollars were spent building facilities. The research allowed the development of a hierarchy of PIC codes and diagnostics that is one of the most advanced in the world.

  14. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
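
    As a concrete illustration of regularized covariance estimation in the p > n regime that the book addresses, the sketch below uses the Ledoit-Wolf shrinkage estimator from scikit-learn; this is one standard approach, not necessarily one of the methods covered in the book.

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf, empirical_covariance

    rng = np.random.default_rng(1)
    n, p = 50, 200                      # far fewer samples than variables
    X = rng.standard_normal((n, p))     # true covariance is the identity

    S = empirical_covariance(X)         # sample covariance: rank-deficient here
    lw = LedoitWolf().fit(X)            # shrinkage toward a scaled identity

    err_sample = np.linalg.norm(S - np.eye(p))
    err_shrunk = np.linalg.norm(lw.covariance_ - np.eye(p))
    print(f"Frobenius error, sample covariance : {err_sample:.1f}")
    print(f"Frobenius error, Ledoit-Wolf       : {err_shrunk:.1f}")
    ```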

  15. Importance of two-dimensional effects for the generation of ultra high pressures obtained in laser colliding foil experiments

    Energy Technology Data Exchange (ETDEWEB)

    Faral, B.; Fabbro, R. (Laboratoire d'Utilisation des Lasers Intenses, Ecole Polytechnique, 91128 Palaiseau Cedex (France)); Virmont, J. (Laboratoire de Physique des Milieux Ionises, Ecole Polytechnique, 91128 Palaiseau Cedex (France)); Cottet, F.; Romain, J.P. (Laboratoire d'Energetique et de Detonique, Ecole Nationale Superieure de Mecanique et d'Aerotechnique, 86034 Poitiers (France)); Pepin, H. (Institut National de la Recherche Scientifique Energie, Montreal (Canada))

    1990-02-01

    A 12 μm polyester foil is accelerated by a 0.26 μm wavelength laser and collides with a 15 μm thick molybdenum foil. The accelerating pressure is 45 Mbar (laser intensity ∼ 3–4×10¹⁴ W/cm²) and gives the polyester foil a velocity of about 160 km/sec. The measurement of the shock pressure induced in the impacted foil is made with an improved step technique. When the initial spacing between the two foils is too large compared to the focal spot radius, i.e., larger than 20–30 μm, the different experimental results cannot be reproduced with one-dimensional simulations; this is only possible by using a two-dimensional Lagrangian code that has been developed and that takes into account the strong deformation of the accelerated foil. Finally, even with the low level of x-ray heating due to the ablation plasma, multihundred-megabar pressures can be obtained within a very short time.

  16. Importance of two-dimensional effects for the generation of ultra high pressures obtained in laser colliding foil experiments

    International Nuclear Information System (INIS)

    Faral, B.; Fabbro, R.; Virmont, J.; Cottet, F.; Romain, J.P.; Pepin, H.

    1990-01-01

    A 12 μm polyester foil is accelerated by a 0.26 μm wavelength laser and collides with a 15 μm thick molybdenum foil. The accelerating pressure is 45 Mbar (laser intensity ∼ 3–4×10¹⁴ W/cm²) and gives the polyester foil a velocity of about 160 km/sec. The measurement of the shock pressure induced in the impacted foil is made with an improved step technique. When the initial spacing between the two foils is too large compared to the focal spot radius, i.e., larger than 20–30 μm, the different experimental results cannot be reproduced with one-dimensional simulations; this is only possible by using a two-dimensional Lagrangian code that has been developed and that takes into account the strong deformation of the accelerated foil. Finally, even with the low level of x-ray heating due to the ablation plasma, multihundred-megabar pressures can be obtained within a very short time.

  17. Reduced dimensionality (3,2)D NMR experiments and their automated analysis: implications to high-throughput structural studies on proteins.

    Science.gov (United States)

    Reddy, Jithender G; Kumar, Dinesh; Hosur, Ramakrishna V

    2015-02-01

    Protein NMR spectroscopy has expanded dramatically over the last decade into a powerful tool for the study of their structure, dynamics, and interactions. The primary requirement for all such investigations is sequence-specific resonance assignment. The demand now is to obtain this information as rapidly as possible and in all types of protein systems, stable/unstable, soluble/insoluble, small/big, structured/unstructured, and so on. In this context, we introduce here two reduced dimensionality experiments – (3,2)D-hNCOcanH and (3,2)D-hNcoCAnH – which enhance the previously described 2D NMR-based assignment methods quite significantly. Both the experiments can be recorded in just about 2-3 h each and hence would be of immense value for high-throughput structural proteomics and drug discovery research. The applicability of the method has been demonstrated using alpha-helical bovine apo calbindin-D9k P43M mutant (75 aa) protein. Automated assignment of this data using AUTOBA has been presented, which enhances the utility of these experiments. The backbone resonance assignments so derived are utilized to estimate secondary structures and the backbone fold using Web-based algorithms. Taken together, we believe that the method and the protocol proposed here can be used for routine high-throughput structural studies of proteins. Copyright © 2014 John Wiley & Sons, Ltd.

  18. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (\\Rpackage{hdm}) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...
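
    For readers who do not use R, the double-selection idea behind this kind of inference (select controls with a lasso of the outcome on the covariates and of the treatment on the covariates, then run OLS on the union) can be sketched in Python. The snippet below illustrates that general approach on simulated data; it is not the hdm package, and all names and numbers are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, p = 200, 500
    X = rng.standard_normal((n, p))                            # high-dimensional controls
    d = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)       # treatment
    y = 1.0 * d + X[:, 0] - X[:, 2] + rng.standard_normal(n)   # true effect = 1.0

    # Step 1: lasso of y on X, and of d on X, to select relevant controls
    sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
    sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
    selected = np.union1d(sel_y, sel_d)

    # Step 2: OLS of y on d and the union of selected controls; the usual
    # standard error for d is then (approximately) valid for inference
    Z = sm.add_constant(np.column_stack([d, X[:, selected]]))
    fit = sm.OLS(y, Z).fit()
    print(f"effect of d: {fit.params[1]:.3f}  (95% CI {fit.conf_int()[1]})")
    ```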

  19. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  20. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, in both a qualitative and a quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  1. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  2. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We note that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  3. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  4. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  5. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  6. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
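
    The adaLASSO two-step (an initial consistent estimate supplies weights, then a weighted lasso performs selection) can be sketched for a sparse autoregression as follows; this Python illustration with simulated data stands in for, and is not, the authors' implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV, Ridge

    rng = np.random.default_rng(3)

    # Simulate a sparse AR process: only lags 1 and 4 matter
    T, max_lag = 600, 20
    y = np.zeros(T)
    for t in range(max_lag, T):
        y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 4] + rng.standard_normal()

    # Build the lagged design matrix
    X = np.column_stack([y[max_lag - k:T - k] for k in range(1, max_lag + 1)])
    target = y[max_lag:]

    # Step 1: initial ridge estimate gives the adaptive weights w_j = 1/|b_j|
    b_init = Ridge(alpha=1.0).fit(X, target).coef_
    weights = 1.0 / (np.abs(b_init) + 1e-6)

    # Step 2: adaptive LASSO = ordinary LASSO on the rescaled design X_j / w_j
    lasso = LassoCV(cv=5).fit(X / weights, target)
    coef = lasso.coef_ / weights        # map back to the original scale
    print("selected lags:", np.flatnonzero(np.abs(coef) > 1e-8) + 1)
    ```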

  7. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  8. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  9. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high-dimensional VAR parameter space with our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  10. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high-dimensional VAR parameter space with our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  11. Dimensional analysis of small-scale steam explosion experiments

    International Nuclear Information System (INIS)

    Huh, K.; Corradini, M.L.

    1986-01-01

    Dimensional analysis is applied to Nelson's small-scale steam explosion experiments to determine the qualitative effect of each relevant parameter on the triggering of a steam explosion. According to the experimental results, the liquid entrapment model seems to be a consistent explanation for the steam explosion triggering mechanism. The three-dimensional oscillatory wave motion of the vapor/liquid interface is analyzed to determine the necessary conditions for local condensation and production of a coolant microjet to be entrapped in fuel. It is proposed that different contact modes between fuel and coolant may involve different initiation mechanisms of steam explosions.

  12. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
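
    The two-step structure of the hybrid approach (a lasso pass to select the support of each VAR equation, followed by an unpenalized least-squares refit on the selected entries) can be sketched on a toy sparse VAR. The Python snippet below illustrates that idea with made-up dimensions and tuning values; it is not the authors' LASSLE code.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(4)

    # Simulate a sparse VAR(1) with 8 channels
    P, T = 8, 1000
    A_true = np.zeros((P, P))
    A_true[0, 1], A_true[2, 5], A_true[4, 4] = 0.6, -0.5, 0.4
    X = np.zeros((T, P))
    for t in range(1, T):
        X[t] = X[t - 1] @ A_true.T + 0.5 * rng.standard_normal(P)

    past, present = X[:-1], X[1:]
    A_hat = np.zeros((P, P))
    for ch in range(P):
        # Step 1 (LASSO): select which lagged channels drive channel `ch`
        sel = np.flatnonzero(Lasso(alpha=0.05).fit(past, present[:, ch]).coef_)
        if sel.size:
            # Step 2 (LSE): unpenalized refit on the selected support to reduce bias
            A_hat[ch, sel] = LinearRegression().fit(past[:, sel], present[:, ch]).coef_

    print("true nonzeros:", sorted(zip(*np.nonzero(A_true))))
    print("estimated    :", sorted(zip(*np.nonzero(np.round(A_hat, 2)))))
    ```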

  13. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
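
    A back-of-the-envelope way to see why a larger alphabet raises the bits per sifted photon: the capacity of a d-ary symmetric channel is log2(d) − h(e) − e·log2(d − 1), where h is the binary entropy and e the symbol error rate. The short Python computation below evaluates this for d = 7 at an illustrative error rate; it is a capacity illustration, not the paper's security analysis.

    ```python
    import numpy as np

    def qudit_channel_capacity(d, error_rate):
        """Capacity (bits/symbol) of a d-ary symmetric channel whose symbol
        error probability is spread uniformly over the d-1 wrong symbols."""
        e = error_rate
        if e == 0:
            return np.log2(d)
        h = -e * np.log2(e) - (1 - e) * np.log2(1 - e)   # binary entropy
        return np.log2(d) - h - e * np.log2(d - 1)

    for d in (2, 4, 7):
        print(f"d={d}: error-free {np.log2(d):.2f} bits, "
              f"at 7% symbol error {qudit_channel_capacity(d, 0.07):.2f} bits")
    ```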

  14. Three-dimensional simulations of Nova capsule implosion experiments

    International Nuclear Information System (INIS)

    Marinak, M.M.; Tipton, R.E.; Landen, O.L.

    1995-01-01

    Capsule implosion experiments carried out on the Nova laser are simulated with the three-dimensional HYDRA radiation hydrodynamics code. Simulations of ordered near single mode perturbations indicate that structures which evolve into round spikes can penetrate farthest into the hot spot. Bubble-shaped perturbations can burn through the capsule shell fastest, however, causing even more damage. Simulations of a capsule with multimode perturbations shows spike amplitudes evolving in good agreement with a saturation model during the deceleration phase. The presence of sizable low mode asymmetry, caused either by drive asymmetry or perturbations in the capsule shell, can dramatically affect the manner in which spikes approach the center of the hot spot. Three-dimensional coupling between the low mode shell perturbations intrinsic to Nova capsules and the drive asymmetry brings the simulated yields into closer agreement with the experimental values

  15. Non-dimensional scaling of impact fast ignition experiments

    International Nuclear Information System (INIS)

    Farley, D R; Shigemori, K; Murakami, M; Azechi, H

    2008-01-01

    Recent experiments at the Osaka University Institute for Laser Engineering (ILE) showed that 'Impact Fast Ignition' (IFI) could increase the neutron yield of inertial fusion targets by two orders of magnitude [1]. IFI utilizes the thermal and kinetic energy of a laser-accelerated disk to impact an imploded fusion target. ILE researchers estimate a disk velocity of 10⁸ cm/sec is needed to ignite the fusion target [2]. To be able to study the IFI concept using lasers different from that at ILE, appropriate non-dimensionalization of the flow should be done. Analysis of the rocket equation gives parameters needed for producing similar IFI results with different lasers. This analysis shows that a variety of laboratory-scale commercial lasers could produce results useful to full-scale ILE experiments.
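
    The rocket-equation scaling mentioned above can be made concrete: for ablative acceleration the flyer velocity is v = v_ex·ln(m0/m), so matching the exhaust velocity and the remaining-mass fraction reproduces the velocity independently of the absolute laser scale. A small Python sketch with illustrative numbers, not values from the ILE experiments:

    ```python
    import numpy as np

    def flyer_velocity(v_exhaust, mass_ratio):
        """Ablative rocket equation: final foil velocity for a given exhaust
        velocity and initial/final mass ratio (consistent units assumed)."""
        return v_exhaust * np.log(mass_ratio)

    # Two lasers that ablate different absolute masses but reach the same
    # exhaust velocity and the same remaining-mass fraction give the same
    # flyer velocity -- the dimensionless quantities, not the laser energy, matter.
    v_ex = 3.0e7          # cm/s, illustrative exhaust velocity
    for remaining_fraction in (0.5, 0.3, 0.1):
        v = flyer_velocity(v_ex, 1.0 / remaining_fraction)
        print(f"remaining mass fraction {remaining_fraction:.1f}: "
              f"flyer velocity {v:.2e} cm/s")
    # Reaching the ~1e8 cm/s threshold quoted in the abstract requires ablating
    # most of the foil mass or a higher exhaust velocity.
    ```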

  16. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  17. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  18. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
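
    One concrete instance of a rank-structured format is the tensor-train decomposition, obtained by successive truncated SVDs of unfoldings of the sampled function. The Python sketch below compresses a smooth four-variable function this way; it illustrates the format only and is not the adaptive or convex-relaxation algorithms referred to in the talk.

    ```python
    import numpy as np

    def tensor_train(tensor, max_rank):
        """Decompose a d-way array into tensor-train cores via successive
        truncated SVDs (a simple rank-structured, low-rank format)."""
        dims = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(rank * dims[0], -1)
        for n, dim in enumerate(dims[:-1]):
            U, s, Vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(U[:, :r].reshape(rank, dim, r))
            mat = (s[:r, None] * Vt[:r]).reshape(r * dims[n + 1], -1)
            rank = r
        cores.append(mat.reshape(rank, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        out = cores[0]
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=([-1], [0]))
        return out.squeeze()

    # A smooth 4-variable function sampled on a 10^4 grid compresses well
    grid = np.linspace(0, 1, 10)
    x1, x2, x3, x4 = np.meshgrid(grid, grid, grid, grid, indexing="ij")
    F = np.sin(x1 + x2 + x3 + x4)
    cores = tensor_train(F, max_rank=3)
    err = np.linalg.norm(tt_reconstruct(cores) - F) / np.linalg.norm(F)
    print(f"relative reconstruction error with TT ranks <= 3: {err:.2e}")
    ```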

  19. Experiment and simulation on one-dimensional plasma photonic crystals

    International Nuclear Information System (INIS)

    Zhang, Lin; Ouyang, Ji-Ting

    2014-01-01

    The transmission characteristics of microwaves passing through one-dimensional plasma photonic crystals (PPCs) have been investigated by experiment and simulation. The PPCs were formed by a series of discharge tubes filled with argon at 5 Torr that the plasma density in tubes can be varied by adjusting the discharge current. The transmittance of X-band microwaves through the crystal structure was measured under different discharge currents and geometrical parameters. The finite-different time-domain method was employed to analyze the detailed properties of the microwaves propagation. The results show that there exist bandgaps when the plasma is turned on. The properties of bandgaps depend on the plasma density and the geometrical parameters of the PPCs structure. The PPCs can perform as dynamical band-stop filter to control the transmission of microwaves within a wide frequency range
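
    The band structure of such a one-dimensional stack can also be estimated with a standard transfer-matrix calculation, treating each discharge tube as a Drude-plasma slab. The Python sketch below uses illustrative layer thicknesses, plasma frequency, and collision rate (not the experimental values) and is a simpler stand-in for the FDTD modelling used in the paper.

    ```python
    import numpy as np

    c = 3e8  # speed of light, m/s

    def layer_matrix(n, thickness, freq):
        """Characteristic matrix of one homogeneous layer at normal incidence."""
        delta = 2 * np.pi * freq * n * thickness / c
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def transmittance(freq, n_periods, d_plasma, d_gap, f_plasma, nu=1e9):
        """Power transmission of a vacuum-embedded stack of alternating
        plasma and vacuum layers (Drude plasma with collision frequency nu)."""
        w = 2 * np.pi * freq
        wp = 2 * np.pi * f_plasma
        eps = 1 - wp**2 / (w * (w + 1j * nu))        # Drude permittivity
        n_plasma = np.sqrt(eps + 0j)
        M = np.eye(2, dtype=complex)
        for _ in range(n_periods):
            M = M @ layer_matrix(n_plasma, d_plasma, freq)
            M = M @ layer_matrix(1.0 + 0j, d_gap, freq)
        t = 2.0 / (M[0, 0] + M[0, 1] + M[1, 0] + M[1, 1])
        return abs(t)**2

    # Illustrative X-band scan: 6 plasma columns, 15 mm plasma / 15 mm gaps,
    # plasma frequency 6 GHz (these numbers are assumptions, not the paper's)
    for f_ghz in (8, 9, 10, 11, 12):
        T = transmittance(f_ghz * 1e9, 6, 0.015, 0.015, 6e9)
        print(f"{f_ghz} GHz: T = {T:.2f}")
    ```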

  20. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  1. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  2. NMR experiments on a three-dimensional vibrofluidized granular medium

    Science.gov (United States)

    Huan, Chao; Yang, Xiaoyu; Candela, D.; Mair, R. W.; Walsworth, R. L.

    2004-04-01

    A three-dimensional granular system fluidized by vertical container vibrations was studied using pulsed field gradient NMR coupled with one-dimensional magnetic resonance imaging. The system consisted of mustard seeds vibrated vertically at 50 Hz, and the number of layers N_l ≤ 4 was sufficiently low to achieve a nearly time-independent granular fluid. Using NMR, the vertical profiles of density and granular temperature were directly measured, along with the distributions of vertical and horizontal grain velocities. The velocity distributions showed modest deviations from Maxwell-Boltzmann statistics, except for the vertical velocity distribution near the sample bottom, which was highly skewed and non-Gaussian. Data taken for three values of N_l and two dimensionless accelerations Γ = 15 and 18 were fitted to a hydrodynamic theory, which successfully models the density and temperature profiles away from the vibrating container bottom. A temperature inversion near the free upper surface is observed, in agreement with predictions based on the hydrodynamic parameter μ which is nonzero only in inelastic systems.

  3. Design guidelines for high dimensional stability of CFRP optical bench

    Science.gov (United States)

    Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe

    2013-09-01

    In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly when embodying reflective optics, angular stability is critical. Angular stability or warping stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve the perfect laminate and there will always be manufacturing errors in trying to reach a quasi-iso laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench) can be easily monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) obtained from typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The selected composite material for the study is the unidirectional prepreg from Tencate M55J/TC410. M55J is a high modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be: the ply angular errors, laminate in-plane parallelism (between 0° ply direction of both facesheets), fiber volume fraction tolerance and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize the warping stability by taking into account the main dimensional stability contributors, the bench geometry and the optical mount interface are then proposed.

  4. High velocity impact experiment (HVIE)

    Energy Technology Data Exchange (ETDEWEB)

    Toor, A.; Donich, T.; Carter, P.

    1998-02-01

    The HVIE space project was conceived as a way to measure the absolute EOS for approximately 10 materials at pressures up to ~30 Mbar with order-of-magnitude higher accuracy than obtainable in any comparable experiment conducted on Earth. The experiment configuration is such that each of the 10 materials interacts with all of the others, thereby producing one hundred independent, simultaneous EOS experiments. The materials will be selected to provide critical information to weapons designers, National Ignition Facility target designers, and planetary and geophysical scientists. In addition, HVIE will provide important scientific information to other communities, including the Ballistic Missile Defense Organization and the lethality and vulnerability community. The basic HVIE concept is to place two probes in counter-rotating, highly elliptical orbits and collide them at high velocity (20 km/s) at 100 km altitude above the Earth. The low altitude of the experiment will provide quick debris strip-out of orbit due to atmospheric drag. The preliminary conceptual evaluation of the HVIE has found no show stoppers. The design has been very easy to keep within the lift capabilities of commonly available rides to low Earth orbit, including the Space Shuttle. The cost of approximately 69 million dollars for 100 EOS experiments that will yield the much-needed high-accuracy, absolute measurement data is a bargain!

  5. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  6. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  7. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  8. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  9. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  10. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise in quantum information applications defined in high-dimensional Hilbert space. (letter)

  11. Orsay: High-gradient experiment

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Maintaining the tradition of its contribution to the LEP Injector Linac (LIL), Orsay's Linear Accelerator Laboratory (LAL) is carrying out an R&D programme entitled 'New accelerator physics experiments at LAL' (NEPAL). The aim is to contribute to the long-term development of high energy electron-positron linear colliders, where progress can be of short-term benefit both to conventional accelerators and to injectors in rings or free-electron lasers

  12. High beta experiments in CHS

    International Nuclear Information System (INIS)

    Okamura, S.; Matsuoka, K.; Nishimura, K.

    1994-09-01

    High beta experiments were performed in the low-aspect-ratio helical device CHS, with the volume-averaged equilibrium beta reaching up to 2.1 %. These values (the highest for helical systems) are obtained for high-density plasmas in a low magnetic field heated with two tangential neutral beams. Confinement improvement, obtained by turning off gas puffing, contributed significantly to reaching high betas. Magnetic fluctuations increased with increasing beta, but eventually saturated in the beta range above 1 %. The coherent modes appearing in the magnetic hill region showed a strong dependence on the beta values. Dynamic poloidal field control was applied to suppress the outward plasma movement with increasing plasma pressure. Such operation gave fixed-boundary operation of high beta plasmas in helical systems. (author)

  13. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. ... have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and fibre glass in the three dimensional form; We also have Pencil, Charcoal, Pastel and Acrylic oil-paint in two dimensional form.

  14. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. It is an art form executed in three dimensional (3D) and two dimensional (2D) formats respectively. Uncountable materials have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and fibre ...

  15. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336.
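
    The record above describes a query-dependent equi-depth quantization; as a rough illustration of the underlying idea only (not the QED scheme itself), the sketch below bins each dimension of a skewed dataset at its empirical quantiles so that every bin holds roughly the same number of points. The data and bin count are arbitrary assumptions.

    ```python
    # Generic equi-depth quantization sketch (not the query-dependent QED variant).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.lognormal(size=(1000, 8))            # skewed high-dimensional data

    n_bins = 16
    # Per-dimension cut points at the empirical quantiles (bin count is an assumption)
    edges = np.quantile(X, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
    codes = np.empty_like(X, dtype=np.int64)
    for j in range(X.shape[1]):
        codes[:, j] = np.searchsorted(edges[:, j], X[:, j])   # bin index per value

    print(np.bincount(codes[:, 0]))              # roughly equal bin occupancies
    ```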

  16. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
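
    As a minimal illustration of the recurrence-plot machinery mentioned in this record (not the authors' code, and on a scalar toy signal rather than the Lorenz96 model), the sketch below builds a thresholded recurrence matrix and reports the recurrence rate; the threshold is an assumed value.

    ```python
    # Minimal recurrence plot sketch for a scalar time series.
    import numpy as np

    def recurrence_matrix(x, eps):
        """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps."""
        d = np.abs(x[:, None] - x[None, :])   # pairwise distances
        return (d < eps).astype(int)

    # Example: a noisy periodic signal (toy stand-in for a real observable)
    t = np.linspace(0, 8 * np.pi, 400)
    x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

    R = recurrence_matrix(x, eps=0.2)         # eps is an assumed threshold
    recurrence_rate = R.mean()                # fraction of recurrent pairs
    print(f"recurrence rate: {recurrence_rate:.3f}")
    ```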

  17. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on method of moments and applications of graph theory....

  18. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
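
    To make the p >> n setting concrete, the following sketch fits a LASSO to synthetic data with far more variables than samples and recovers a sparse coefficient vector. The data, the choice of scikit-learn, and the regularization strength alpha are assumptions for illustration, not part of the lecture.

    ```python
    # Sparse estimation in the p >> n regime with the LASSO (synthetic example).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 50, 1000                          # far more variables than samples
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 truly active variables
    y = X @ beta + 0.1 * rng.normal(size=n)

    model = Lasso(alpha=0.1).fit(X, y)       # alpha is an assumed tuning value
    selected = np.flatnonzero(model.coef_)
    print("non-zero coefficients at indices:", selected)
    ```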

  19. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  20. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges for chaos are obtained. The existence of chaos is confirmed by calculation and analysis of the Lyapunov exponents of all state variables and of the state-variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
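
    The record relies on Lyapunov exponents to certify chaos. As a stand-in only (the 11-dimensional wind power model is not given here), the sketch below estimates the largest Lyapunov exponent of the one-dimensional logistic map as the orbit-averaged log derivative; a positive value indicates chaos.

    ```python
    # Largest Lyapunov exponent of the logistic map (a toy stand-in system).
    import numpy as np

    def largest_lyapunov_logistic(r=4.0, x0=0.4, n=100_000, burn_in=1_000):
        x = x0
        for _ in range(burn_in):                 # discard transients
            x = r * x * (1 - x)
        s = 0.0
        for _ in range(n):
            x = r * x * (1 - x)
            s += np.log(abs(r * (1 - 2 * x)))    # |f'(x)| of the logistic map
        return s / n

    print(largest_lyapunov_logistic())           # ~ ln 2 ≈ 0.693 > 0, i.e. chaotic
    ```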

  1. GOTCHA experience report: three-dimensional SAR imaging with complete circular apertures

    Science.gov (United States)

    Ertin, Emre; Austin, Christian D.; Sharma, Samir; Moses, Randolph L.; Potter, Lee C.

    2007-04-01

    We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information about the anisotropic reflectivity of the scattering centers in the scene, and also provides three-dimensional information about the location of the scattering centers due to a non-planar collection geometry. Three-dimensional imaging results with single-pass circular SAR data reveal that the 3D resolution of the system is poor due to the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR data and illustrate reasoning about three-dimensional shape from multi-view layover using prior information about target scattering mechanisms. Next, we discuss processing of multipass CSAR data and present volumetric imaging results with IFSAR and three-dimensional backprojection techniques on the GOTCHA data set. We observe that the volumetric imaging with GOTCHA data is degraded by aliasing and high sidelobes due to nonlinear flight paths and sparse and unequal sampling in elevation. We conclude with a model-based technique that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using the estimated model.

  2. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Cluster analysis has recently become a popular data analysis method in a number of areas.
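
    The hybridized method itself is not given in this truncated record; as a generic baseline sketch, the code below clusters synthetic high-dimensional data with plain k-means after a PCA step. The dataset, cluster count and component count are assumptions.

    ```python
    # Baseline k-means clustering of high-dimensional data after PCA (not the hybridized method).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Three synthetic clusters embedded in 500 dimensions
    centers = rng.normal(scale=5.0, size=(3, 500))
    X = np.vstack([c + rng.normal(size=(100, 500)) for c in centers])

    X_low = PCA(n_components=10).fit_transform(X)   # reduce dimensionality first
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_low)
    print(np.bincount(labels))                      # cluster sizes
    ```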

  3. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification remains poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
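
    In the spirit of FAIR (but not the authors' implementation), the sketch below ranks features by two-sample t-statistics, keeps the top m, and classifies with a nearest-centroid rule on the selected features; the synthetic data and the value of m are assumptions.

    ```python
    # Feature selection by two-sample t-statistics followed by a nearest-centroid classifier.
    import numpy as np
    from scipy import stats
    from sklearn.neighbors import NearestCentroid

    rng = np.random.default_rng(0)
    n, p, m = 100, 2000, 20
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    X[y == 1, :10] += 1.0                 # only the first 10 features carry signal

    t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    top = np.argsort(-np.abs(t))[:m]      # features with the largest |t|

    clf = NearestCentroid().fit(X[:, top], y)
    print("training accuracy:", clf.score(X[:, top], y))
    ```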

  4. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  5. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  6. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and

  7. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  8. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  9. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  10. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  11. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  12. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
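
    The gradient-free, GP-based estimation of the active subspace described above is not reproduced here; the sketch below only illustrates the projection idea, with a fixed random orthonormal matrix W standing in for the learned projection that the paper treats as a covariance hyper-parameter.

    ```python
    # Projection onto a low-dimensional subspace followed by GP regression (illustrative only).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    D, d, n = 50, 2, 200                       # ambient dim, subspace dim, samples
    X = rng.normal(size=(n, D))
    w_true = rng.normal(size=D)
    y = np.sin(X @ w_true / np.sqrt(D)) + 0.01 * rng.normal(size=n)

    W, _ = np.linalg.qr(rng.normal(size=(D, d)))   # random orthonormal projection (assumed, not learned)
    Z = X @ W                                       # low-dimensional inputs

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4).fit(Z, y)
    print("R^2 on training data:", gp.score(Z, y))
    ```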

  13. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
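
    As a rough illustration of the hashing family that LSB-trees build on (not the LSB-tree or LSB-forest themselves), the sketch below hashes points with random hyperplanes and answers a query by scanning only its bucket; all sizes are assumptions.

    ```python
    # Sign-random-projection LSH sketch: nearby points tend to share hash signatures.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, n_bits = 10000, 128, 16
    X = rng.normal(size=(n, d))
    planes = rng.normal(size=(d, n_bits))      # random hyperplanes

    codes = (X @ planes > 0)                   # boolean signature per point
    buckets = {}
    for i, code in enumerate(map(tuple, codes)):
        buckets.setdefault(code, []).append(i)

    # Query: look only at points sharing the query's bucket
    q = X[0]
    candidates = buckets[tuple(q @ planes > 0)]
    print(f"candidates examined: {len(candidates)} of {n}")
    ```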

  14. Assessment of wall friction model in multi-dimensional component of MARS with air–water cross flow experiment

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jin-Hwa [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Choi, Chi-Jin [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Euh, Dong-Jin [Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Park, Goon-Cherl [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)

    2017-02-15

    Recently, high-precision and high-accuracy analysis of multi-dimensional thermal-hydraulic phenomena in nuclear power plants has become a state-of-the-art issue. The system analysis code MARS has adopted a multi-dimensional module to simulate such phenomena more accurately. Although the module is intended to represent multi-dimensional phenomena, the models and correlations implemented in it are one-dimensional empirical ones based on one-dimensional pipe experiments. Prior to the application of multi-dimensional simulation tools, however, the constitutive models for two-phase flow, such as the wall friction model, need to be carefully validated. In particular, in a Direct Vessel Injection (DVI) system, the emergency core coolant (ECC) injected into the upper part of the downcomer interacts with the lateral steam flow during the reflood phase of a Large-Break Loss-Of-Coolant Accident (LBLOCA). The interaction between the falling film and the lateral steam flow induces a multi-dimensional two-phase flow, and the prediction of the ECC flow behavior plays a key role in determining the amount of coolant available for core cooling. Therefore, the wall friction model implemented for multi-dimensional simulation should be assessed against multi-dimensional experimental results. This paper introduces air–water cross film flow experiments simulating, as a conceptual problem, the multi-dimensional phenomenon in the upper part of the downcomer. The two-dimensional local liquid film velocity and thickness data were used as benchmark data for code assessment, and the previous wall friction model of MARS-MultiD in the annular flow regime was then modified. As a result, the modified MARS-MultiD produced better calculation results than the previous version.

  15. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
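
    A toy version of the recursive numerical integration idea is sketched below: an m-point Gauss-Legendre rule is applied dimension by dimension to a smooth integrand on [-1, 1]^d. The integrand, the dimension and the node count are assumptions; the lattice-QCD setting of the record is not reproduced.

    ```python
    # Iterated (recursive) Gauss-Legendre quadrature over a low-dimensional hypercube.
    import numpy as np

    def recursive_quadrature(f, d, m):
        """Integrate f over [-1, 1]^d by applying an m-point 1D rule dimension by dimension."""
        nodes, weights = np.polynomial.legendre.leggauss(m)

        def integrate(dim, point):
            if dim == d:
                return f(np.array(point))
            return sum(w * integrate(dim + 1, point + [x])
                       for x, w in zip(nodes, weights))

        return integrate(0, [])

    f = lambda x: np.exp(-np.sum(x**2))          # toy test integrand
    print(recursive_quadrature(f, d=4, m=8))     # ≈ 4.98 for this Gaussian
    ```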

  16. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks is of interest as a promising solution for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  17. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  18. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
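
    The factor-plus-thresholding idea can be sketched as follows (a simplified stand-in, not the paper's estimator with its data-driven tuning): estimate the common component by the leading principal components of the sample covariance, then soft-threshold the residual covariance. The number of factors K and the threshold tau are assumed constants.

    ```python
    # Simplified factor-model covariance estimator: PCA factors + soft-thresholded residuals.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, K, tau = 200, 100, 3, 0.1
    B = rng.normal(size=(p, K))                 # factor loadings
    F = rng.normal(size=(n, K))                 # factor realizations
    X = F @ B.T + rng.normal(scale=0.5, size=(n, p))

    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    lead = vecs[:, -K:] * np.sqrt(vals[-K:])    # leading principal components
    low_rank = lead @ lead.T                    # common-factor part of the covariance

    resid = S - low_rank
    off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)   # soft threshold
    np.fill_diagonal(off, np.diag(resid))       # keep the diagonal untouched
    Sigma_hat = low_rank + off
    print("estimated covariance shape:", Sigma_hat.shape)
    ```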

  19. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  20. Online 4-dimensional event reconstruction in the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, Valentina [Goethe-Universitaet Frankfurt, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Joint Institute for Nuclear Research, Dubna (Russian Federation); Kisel, Ivan [Goethe-Universitaet Frankfurt, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The heavy-ion experiment CBM will focus on the measurement of rare probes at interaction rates up to 10 MHz with a data flow of up to 1 TB/s. The free-running data acquisition, delivering a stream of untriggered detector data, requires full event reconstruction and selection to be performed online not only in space, but also in time. The First-Level Event Selection package consists of several modules: track finding, track fitting, short-lived particle finding, event building and event selection. For track reconstruction the Cellular Automaton (CA) method is used, which allows tracks to be reconstructed with high efficiency within a time-slice and event building to be performed. The time-based CA track finder resolves the tracks of a time-slice into event-corresponding groups. The algorithm is intrinsically local and the implementation is both vectorized and parallelized between CPU cores. The CA track finder shows strong scalability on many-core systems: a speed-up factor of 10.6 was achieved on a CPU with 10 hyper-threaded physical cores.

  1. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...

  2. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
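
    A first-order cut-HDMR expansion, the building block referred to above, can be written down in a few lines; the sketch below uses a toy four-parameter function and a zero reference point (both assumptions), whereas the paper couples the expansion with fuzzy alpha-cuts and finite-element models.

    ```python
    # First-order cut-HDMR: f(x) ≈ f0 + sum_i [ f(x with only x_i varied) - f0 ].
    import numpy as np

    def f(x):                                   # toy multi-parameter response
        return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2] * x[3]

    x_ref = np.zeros(4)                         # cut (reference) point, an assumption
    f0 = f(x_ref)

    def hdmr_first_order(x):
        total = f0
        for i in range(len(x)):
            xi = x_ref.copy()
            xi[i] = x[i]                        # vary one input at a time
            total += f(xi) - f0
        return total

    x_test = np.array([0.3, -0.2, 0.5, 0.1])
    print("exact:", f(x_test), " first-order HDMR:", hdmr_first_order(x_test))
    ```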

  3. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with high risk of disruption and those with low risk of disruption. (paper)

  4. Small angle X-ray scattering experiments with three-dimensional imaging gas detectors

    International Nuclear Information System (INIS)

    La Monaca, A.; Iannuzzi, M.; Messi, R.

    1985-01-01

    Measurements of small angle X-ray scattering of Lupolen-R, dry collagen and dry cornea are presented. The experiments were performed with synchrotron radiation and a new three-dimensional imaging drift-chamber gas detector.

  5. Three-dimensional ultrasound. Early personal experience with a dedicated unit and literature review

    International Nuclear Information System (INIS)

    Cesarani, F.; Isolato, G.; Capello, S.; Bianchi, S.D.

    1999-01-01

    The authors report their preliminary clinical experience with three-dimensional ultrasound (3D US) in abdominal and small parts imaging, comparing the yield of 3D versus 2D US in light of a literature review.

  6. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
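
    The linear ROSM idea can be sketched as PCA compression of the high-dimensional outputs followed by radial-basis-function interpolation of the reduced coordinates over the parameter space. The snapshot data and the number of retained components below are assumptions, and the kernel-PCA variant is not shown.

    ```python
    # Linear reduced-order surrogate: PCA on output snapshots + RBF interpolation in parameter space.
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    n_snap, n_out = 40, 5000                    # snapshots x output dimension
    params = rng.uniform(-1, 1, size=(n_snap, 2))
    outputs = np.stack([np.sin(3 * p[0]) * np.cos(2 * p[1]) * np.linspace(0, 1, n_out)
                        for p in params])       # synthetic stand-in for simulation fields

    pca = PCA(n_components=5).fit(outputs)
    coeffs = pca.transform(outputs)             # low-dimensional representation
    rbf = RBFInterpolator(params, coeffs)       # map parameters -> reduced coordinates

    new_param = np.array([[0.2, -0.4]])
    prediction = pca.inverse_transform(rbf(new_param))   # back to full output space
    print(prediction.shape)
    ```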

  8. Analysis of the OPERA-15 two-dimensional voiding experiment using the SAS4A code

    International Nuclear Information System (INIS)

    Briggs, L.L.

    1984-01-01

    Overall, SAS4A appears to do a good job of simulating the OPERA-15 experiment. For most of the experiment parameters, the code calculations compare quite well with the experimental data. The lack of a multi-dimensional voiding model has the effect of extending the flow coastdown time until voiding starts; otherwise, the code simulates the accident progression satisfactorily. These results indicate a need for further work in this area, in the form of a tandem analysis by a two-dimensional flow code and a one-dimensional version of that code, to confirm the observations derived from the SAS4A analysis.

  9. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
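
    The report's approach of composing one-dimensional solvers into a four-dimensional integral can be illustrated with SciPy's nested adaptive quadrature; the Gaussian integrand below is a toy stand-in, since the actual SNS intensity integrand is not given here.

    ```python
    # Four-dimensional integral evaluated by nesting one-dimensional adaptive quadrature.
    import numpy as np
    from scipy.integrate import nquad

    def integrand(x, y, z, w):
        # Toy stand-in for the SNS intensity model (which is not given in the report)
        return np.exp(-(x**2 + y**2 + z**2 + w**2))

    value, error = nquad(integrand, [[-1, 1]] * 4)
    print(f"integral ~ {value:.6f} (estimated error {error:.1e})")
    ```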

  10. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  11. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  12. The dimensionality of stellar chemical space using spectra from the Apache Point Observatory Galactic Evolution Experiment

    Science.gov (United States)

    Price-Jones, Natalie; Bovy, Jo

    2018-03-01

    Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not pre-suppose knowledge of which elements are important for distinguishing stars in chemical space. We use ~16 000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5±2)^(n-10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.

  13. Multi-dimensional reflooding experiments: the PEARL program

    International Nuclear Information System (INIS)

    Stenne, N.; Pradier, M.; Olivieri, J.; Eymery, S.; Fichot, F.; March, P.; Fleurot, J.

    2011-01-01

    PEARL is an experimental program to study heat transfer and flow regimes during the reflooding of a severely damaged PWR core in which a large part of the core has collapsed and formed a debris bed. The PEARL device will consist of a water-steam loop whose key component is an autoclave capable of housing a test section containing the particle bed and its instrumentation, made up of thermocouples, pressure sensors and flow-rate meters. An electromagnetic induction heating system will generate a predefined specific power in the debris bed and maintain that power during the water reflooding phase. A preliminary experimental investigation has been launched with the setting up of the PRELUDE facility, which is one-dimensional. Its main aim was to test the particle-bed heating system and instrumentation during the reflooding phase. PRELUDE results obtained so far show that the chosen technology is able to deposit a sufficient power density during the reflooding phase. Moreover, a debris-bed temperature of 1000 °C is reached accurately with the induction system.

  14. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra that satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further study of the physical mechanism of black-hole radiation. (orig.)

  15. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  16. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available PHYSICAL REVIEW A 96, 053860 (2017) High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South Africa

  17. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  18. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.

  19. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by perf...

  20. High aspect ratio spheromak experiments

    International Nuclear Information System (INIS)

    Robertson, S.; Schmid, P.

    1987-05-01

    The Reversatron RFP (R/a = 50cm/8cm) has been operated as an ohmically heated spheromak of high aspect ratio. We find that the dynamo can drive the toroidal field upward at rates as high as 10^6 G/sec. Discharges can be initiated and ramped upward from seed fields as low as 50 G. Small toroidal bias fields of either polarity (-0.2 < F < 0.2) do not significantly affect operation. 5 refs., 3 figs

  1. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects through measurements, and to build images out of those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  2. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
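
    The FFT-based Poisson solve mentioned in the abstract can be sketched as follows (a minimal periodic 2D example with a made-up charge blob, not the CERN program itself):

```python
# Minimal sketch of the FFT step named in the abstract: solve Poisson's
# equation  laplacian(phi) = -rho/eps0  on a 2D periodic grid in Fourier space.
# (Illustrative only; grid size and charge distribution are made up.)
import numpy as np

nx, ny, L, eps0 = 128, 128, 1.0, 8.854e-12
x = np.linspace(0, L, nx, endpoint=False)
y = np.linspace(0, L, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")

# A localized charge blob standing in for the deposited beam charge.
rho = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.01)
rho -= rho.mean()                      # a periodic box must be globally neutral

kx = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)
ky = 2 * np.pi * np.fft.fftfreq(ny, d=L / ny)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                         # avoid division by zero; the mean mode is zero anyway

phi_hat = np.fft.fft2(rho) / (eps0 * k2)        # -k^2 phi_hat = -rho_hat/eps0
phi = np.real(np.fft.ifft2(phi_hat))
Ex = -np.real(np.fft.ifft2(1j * KX * phi_hat))  # E = -grad(phi), via spectral derivative
Ey = -np.real(np.fft.ifft2(1j * KY * phi_hat))
print(phi.shape, float(Ex.max()))
```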

  3. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
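
    FLANN's randomized k-d forest is exposed, for example, through OpenCV's FlannBasedMatcher; the sketch below uses random descriptors as stand-ins for real features (the parameter values are illustrative assumptions):

```python
# Hedged example of approximate NN matching with the FLANN randomized k-d
# forest via OpenCV. The descriptors are random stand-ins for e.g. SIFT.
import numpy as np
import cv2

rng = np.random.default_rng(2)
train = rng.random((10000, 128), dtype=np.float32)                 # database of 128-D descriptors
query = train[:5] + rng.normal(0, 0.01, (5, 128)).astype(np.float32)

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=8)         # randomized k-d forest
search_params = dict(checks=64)                                    # leaves visited; speed vs accuracy

matcher = cv2.FlannBasedMatcher(index_params, search_params)
matches = matcher.knnMatch(query, train, k=2)                      # 2 approximate NNs per query
for best, _second in matches:
    print("query", best.queryIdx, "-> train", best.trainIdx, "dist", round(best.distance, 4))
```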

  4. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  5. Application of the three-dimensional transport code to analysis of the neutron streaming experiment

    International Nuclear Information System (INIS)

    Chatani, K.; Slater, C.O.

    1990-01-01

    The neutron streaming through an experimental mock-up of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway was recalculated with a three-dimensional discrete ordinates code. The experiment was conducted at the Tower Shielding Facility at Oak Ridge National Laboratory in 1976 and 1977. The measurement of the neutron flux, using Bonner ball detectors, indicated nine orders of attenuation in the empty pipeway, which contained two 90-deg bends and was surrounded by concrete walls. The measurement data were originally analyzed using the DOT3.5 two-dimensional discrete ordinates radiation transport code. However, the results did not agree with measurement data at the bend because of the difficulties in modeling the three-dimensional configurations using two-dimensional methods. The two-dimensional calculations used a three-step procedure in which each of the three legs making the two 90-deg bends was a separate calculation. The experiment was recently analyzed with the TORT three-dimensional discrete ordinates radiation transport code, not only to compare the calculational results with the experimental results, but also to compare with results obtained from analyses in Japan using DOT3.5, MORSE, and ENSEMBLE, which is a three-dimensional discrete ordinates radiation transport code developed in Japan

  6. Three-dimensional Simulation of Gas Conductance Measurement Experiments on Alcator C-Mod

    International Nuclear Information System (INIS)

    Stotler, D.P.; LaBombard, B.

    2004-01-01

    Three-dimensional Monte Carlo neutral transport simulations of gas flow through the Alcator C-Mod subdivertor yield conductances comparable to those found in dedicated experiments. All are significantly smaller than the conductance found with the previously used axisymmetric geometry. A benchmarking exercise of the code against known conductance values for gas flow through a simple pipe provides a physical basis for interpreting the comparison of the three-dimensional and experimental C-Mod conductances

  7. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  8. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
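
    The "down-sizing" strategy evaluated in the paper can be sketched as follows (a minimal illustration on synthetic high-dimensional data with a generic sparse logistic classifier, not the authors' exact setup):

```python
# Minimal sketch of down-sizing: subsample the majority class to the size of
# the minority class before training. Data, classifier and sizes are
# illustrative assumptions, not the study's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_major, n_minor, p = 900, 100, 2000          # far more variables than samples
X = rng.normal(size=(n_major + n_minor, p))
y = np.r_[np.zeros(n_major, dtype=int), np.ones(n_minor, dtype=int)]
X[y == 1, :10] += 1.0                          # only 10 variables separate the classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Down-size: keep all minority samples and a random subset of majority samples.
minority = np.where(y_tr == 1)[0]
majority = rng.choice(np.where(y_tr == 0)[0], size=len(minority), replace=False)
keep = np.r_[minority, majority]

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr[keep], y_tr[keep])
print("class-specific accuracy:",
      [clf.score(X_te[y_te == c], y_te[y_te == c]) for c in (0, 1)])
```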

  9. Linking experiment and theory for three-dimensional networked binary metal nanoparticle–triblock terpolymer superstructures

    KAUST Repository

    Li, Zihui; Hur, Kahyun; Sai, Hiroaki; Higuchi, Takeshi; Takahara, Atsushi; Jinnai, Hiroshi; Gruner, Sol M.; Wiesner, Ulrich

    2014-01-01

    the intimate coupling of synthesis, in-depth electron tomographic characterization and theory enables exquisite control of superstructure in highly ordered porous three-dimensional continuous networks from single and binary mixtures of metal nanoparticles

  10. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs unimportant input factors.

  11. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has...... is minimized. Next, the method is applied on different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high...... dimensional reliability problems in structural dynamics.

  12. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  13. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence exhibits non-Markovian features and the quantum-correlation freezing phenomenon. The former is much weaker than that in a sub-Ohmic or Ohmic thermal reservoir environment.
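
    The negativity used above as an entanglement measure can be computed from the partial transpose of the density matrix; below is a small numpy sketch for a toy noisy two-qutrit state (the state itself is an illustrative assumption, not the spin-S model of the paper):

```python
# Hedged sketch: negativity via the partial transpose of a bipartite density
# matrix, here for a toy noisy maximally entangled pair of qutrits.
import numpy as np

d = 3                                           # local dimension (a spin-1 / qutrit stand-in)
psi = np.eye(d).reshape(d * d) / np.sqrt(d)     # |psi> = sum_i |ii> / sqrt(d)
rho_pure = np.outer(psi, psi)
p = 0.7                                         # mixing with white noise (dephasing stand-in)
rho = p * rho_pure + (1 - p) * np.eye(d * d) / d**2

# Partial transpose on the second subsystem: reshape to (d,d,d,d) and swap its indices.
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

eigvals = np.linalg.eigvalsh(rho_pt)
negativity = -eigvals[eigvals < 0].sum()        # sum of |negative eigenvalues|
print("negativity:", round(negativity, 4))
```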

  14. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation...... and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects...... of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...

  15. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  16. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios have been widely used in different fields, including the aerospace and defense industries, while the dimensional measurement of these micro parts has become a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for the precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures based on spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of these probes are analyzed. In order to improve the performance of these probes, several approaches are proposed. A two-dimensional orthogonal path arrangement is proposed to enhance the dimensional measurement capability of MFL-collimation probes, while a high-resolution, fast-response interrogation method based on a differential scheme is used to improve the accuracy and dynamic characteristics of the FBG probes. Experiments on these specially structured fiber probes are presented with a focus on their characteristics, and engineering applications are also presented to demonstrate their practical availability. In order to improve the accuracy and real-time performance of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was thereby verified through both analysis and experiments.

  17. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would

  18. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
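
    The recovery problem these restricted isometry conditions concern can be illustrated with a minimal ℓ1-minimization (basis pursuit) example; the use of cvxpy and the problem sizes are assumptions made for illustration, not part of the thesis:

```python
# Hedged sketch: exact recovery of a k-sparse signal from few random linear
# measurements by l1 minimization (basis pursuit).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, m, k = 200, 80, 8                              # ambient dim, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)          # Gaussian matrices satisfy RIP w.h.p.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```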

  19. Two dimensional simulation of high power laser-surface interaction

    International Nuclear Information System (INIS)

    Goldman, S.R.; Wilke, M.D.; Green, R.E.L.; Johnson, R.P.; Busch, G.E.

    1998-01-01

    For laser intensities in the range of 10^8-10^9 W/cm^2, and pulse lengths of order 10 microseconds or longer, the authors have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense-material aspects of the laser-matter interaction. The unique aspect of their treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir, suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including the impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. The authors present an analysis of some relatively well diagnosed experiments which have been useful in developing their modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence their simulational technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity and pulse-width range. This work is useful in many technical areas such as materials processing

  20. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.
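
    The role of the anchor point z can be illustrated with a toy sketch: evaluating f along axis-parallel lines through z isolates each univariate factor up to a constant, so f can be rebuilt from O(d·N) point queries (the specific factor functions and grid are illustrative assumptions, and this is not the paper's adaptive query algorithm):

```python
# Toy sketch: rebuilding a rank-one function f(x_1,...,x_d) = f_1(x_1)...f_d(x_d)
# from evaluations along axis-parallel lines through an anchor z with f(z) != 0.
import numpy as np

d, N = 5, 20
factors = [lambda t, a=a: np.cos(a * t) + 2.0 for a in range(1, d + 1)]   # made-up factors
f = lambda x: np.prod([factors[j](x[j]) for j in range(d)])

z = np.full(d, 0.3)
fz = f(z)                                       # assumed nonvanishing anchor value
grid = np.linspace(0.0, 1.0, N)

# Along the j-th line, f(z_1,...,t,...,z_d) = f_j(t) * prod_{i != j} f_i(z_i),
# so dividing by f(z) leaves f_j(t) / f_j(z_j).
lines = []
for j in range(d):
    vals = []
    for t in grid:
        x = z.copy()
        x[j] = t
        vals.append(f(x))
    lines.append(np.array(vals) / fz)

# Reconstruct f at a random point from the line data and the anchor value.
x_test = np.random.default_rng(5).random(d)
approx = fz * np.prod([np.interp(x_test[j], grid, lines[j]) for j in range(d)])
print("relative error:", abs(approx - f(x_test)) / abs(f(x_test)))
```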

  1. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars

    2013-01-01

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.

  2. Numerical analysis of biological clogging in two-dimensional sand box experiments

    DEFF Research Database (Denmark)

    Kildsgaard, J.; Engesgaard, Peter Knudegaard

    2001-01-01

    Two-dimensional models for biological clogging and sorptive tracer transport were used to study the progress of clogging in a sand box experiment. The sand box had been inoculated with a strip of bacteria and exposed to a continuous injection of nitrate and acetate. Brilliant Blue was regularly...... injected during the clogging experiment and digital images of the tracer movement had been converted to concentration maps using an image analysis. The calibration of the models to the Brilliant Blue observations shows that Brilliant Blue has a solid biomass dependent sorption that is not compliant...... with the assumed linear constant Kd behaviour. It is demonstrated that the dimensionality of sand box experiments in comparison to column experiments results in a much lower reduction in hydraulic conductivity (factor of 100) and that the bulk hydraulic conductivity of the sand box decreased only slightly. However...

  3. Characterization of 3-dimensional superconductive thin film components for gravitational experiments in space

    Energy Technology Data Exchange (ETDEWEB)

    Hechler, S.; Nawrodt, R.; Nietzsche, S.; Vodel, W.; Seidel, P. [Friedrich-Schiller-Univ. Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [ZARM, Univ. Bremen (Germany); Loeffler, F. [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    2007-07-01

    Superconducting quantum interference devices (SQUIDs) are used for highly precise gravitational experiments. One of the most impressive experiments is the satellite test of the equivalence principle (STEP) of NASA/ESA. The STEP mission aims to probe a possible violation of Einstein's equivalence principle at an extreme level of accuracy of 1 part in 10^18 in space. In this contribution we present automated measurement equipment to characterize 3-dimensional superconducting thin film components, such as pick-up coils and test masses, for STEP. The characterization is done by measurements of the transition temperature between the normal and the superconducting state using a specially built anti-cryostat. Above all, the setup was designed for use in normal LHe transport Dewars. The sample chamber has a volume of 150 cm^3 and can be fully temperature controlled over a range from 4.2 K to 300 K with a resolution of better than 100 mK. (orig.)

  4. An Eoetvoes versus a Galileo experiment: A study in two versus three-dimensional physics

    International Nuclear Information System (INIS)

    Hughes, R.J.; Nieto, M.M.; Goldman, T.

    1988-01-01

    We show how the net effect of two new, approximately cancelling (vector and scalar) gravitational forces could produce a measurable effect from a horizontal thin slab in an Eoetvoes experiment, yet yield a null result at the same level for a Galileo experiment. The resolution is an example of two- versus three-dimensional physics and the cancelling nature of the two forces. Using two different earth models, we apply this result to the Australian mine gravity data of Stacey et al., the Brookhaven Eoetvoes experiment of Thieberger, and the Colorado Galileo experiment of Niebauer et al. (orig.)

  5. Three-dimensional modelling of an injection experiment in the anaerobic part of a landfill plume

    DEFF Research Database (Denmark)

    Juul Petersen, Michael; Engesgaard, Peter Knudegaard; Bjerg, Poul Løgstrup

    1998-01-01

    Analytical and numerical three-dimensional (3-D) simulations have been conducted and compared to data obtained from a large-scale (50 m), natural gradient field injection experiment. Eighteen different xenobiotic compounds (i.e. benzene, toluene, o-xylene, naphthalene, 1,1,1-TCA, PCE, and TCE...

  6. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety-percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high-dimensional

  7. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic.  The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation.  Part III, consist...

  8. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification we coined Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules, to achieve speed efficiency by eliminating early irrelevant features.
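
    The joint estimation idea can be sketched by alternating a Lasso fit, whose regularization is proportional to the current noise estimate, with an update of that estimate from the residuals, in the spirit of the Scaled Lasso (a toy sketch, not the authors' smoothed solver; the regularization level and data are assumptions):

```python
# Toy sketch of joint (beta, sigma) estimation in the Concomitant/Scaled Lasso
# spirit: alternate a Lasso fit with lambda proportional to the current noise
# estimate and an update of that estimate from the residuals.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p, sigma_true = 100, 500, 0.5
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + sigma_true * rng.normal(size=n)

lam0 = np.sqrt(2 * np.log(p) / n)        # universal regularization level (illustrative choice)
sigma = np.std(y)                        # crude initial noise estimate
for _ in range(20):
    model = Lasso(alpha=lam0 * sigma, max_iter=10000)
    model.fit(X, y)
    resid = y - model.predict(X)
    sigma_new = np.sqrt(np.mean(resid**2))
    if abs(sigma_new - sigma) < 1e-6:
        break
    sigma = sigma_new

print("estimated noise level:", round(sigma, 3), " true:", sigma_true)
print("nonzero coefficients found:", int(np.sum(model.coef_ != 0)))
```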

  9. Comparison of electron cloud simulation and experiments in the high-current experiment

    International Nuclear Information System (INIS)

    Cohen, R.H.; Friedman, A.; Covo, M. Kireeff; Lund, S.M.; Molvik, A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J.-L.; Verboncoeur, J.; Stoltz, P.; Veitzer, S.

    2004-01-01

    A set of experiments has been performed on the High-Current Experiment (HCX) facility at LBNL, in which the ion beam is allowed to collide with an end plate and thereby induce a copious supply of desorbed electrons. Through the use of combinations of biased and grounded electrodes positioned in between and downstream of the quadrupole magnets, the flow of electrons upstream into the magnets can be turned on or off. Properties of the resultant ion beam are measured under each condition. The experiment is modeled via a full three-dimensional, two species (electron and ion) particle simulation, as well as via reduced simulations (ions with appropriately chosen model electron cloud distributions, and a high-resolution simulation of the region adjacent to the end plate). The three-dimensional simulations are the first of their kind and the first to make use of a timestep-acceleration scheme that allows the electrons to be advanced with a timestep that is not small compared to the highest electron cyclotron period. The simulations reproduce qualitative aspects of the experiments, illustrate some unanticipated physical effects, and serve as an important demonstration of a developing simulation capability

  10. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found for two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R^4 onto R^3, which establishes an equivalence between the Coulomb and harmonic potentials, is used to show that an exact solution for the Zeeman effect in highly excited atoms cannot be reached. (Author) [pt

  11. Five-dimensional Myers-Perry black holes cannot be overspun in gedanken experiments

    Science.gov (United States)

    An, Jincheng; Shan, Jieru; Zhang, Hongbao; Zhao, Suting

    2018-05-01

    We apply the new version of a gedanken experiment designed recently by Sorce and Wald to overspin the five-dimensional Myers-Perry black holes. As a result, the extremal black holes cannot be overspun at the linear order. On the other hand, although the nearly extremal black holes could be overspun at the linear order, this process is shown to be prohibited by the quadratic order correction. Thus, no violation of the weak cosmic censorship conjecture occurs around the five-dimensional Myers-Perry black holes.

  12. Characterization of highly anisotropic three-dimensionally nanostructured surfaces

    International Nuclear Information System (INIS)

    Schmidt, Daniel

    2014-01-01

    Generalized ellipsometry, a non-destructive optical characterization technique, is employed to determine geometrical structure parameters and anisotropic dielectric properties of highly spatially coherent three-dimensionally nanostructured thin films grown by glancing angle deposition. The (piecewise) homogeneous biaxial layer model approach is discussed, which can be universally applied to model the optical response of sculptured thin films with different geometries and from diverse materials, and structural parameters as well as effective optical properties of the nanostructured thin films are obtained. Alternative model approaches for slanted columnar thin films, anisotropic effective medium approximations based on the Bruggeman formalism, are presented, which deliver results comparable to the homogeneous biaxial layer approach and in addition provide film constituent volume fraction parameters as well as depolarization or shape factors. Advantages of these ellipsometry models are discussed on the example of metal slanted columnar thin films, which have been conformally coated with a thin passivating oxide layer by atomic layer deposition. Furthermore, the application of an effective medium approximation approach to in-situ growth monitoring of this anisotropic thin film functionalization process is presented. It was found that structural parameters determined with the presented optical model equivalents for slanted columnar thin films agree very well with scanning electron microscope image estimates. - Highlights: • Summary of optical model strategies for sculptured thin films with arbitrary geometries • Application of the rigorous anisotropic Bruggeman effective medium applications • In-situ growth monitoring of atomic layer deposition on biaxial metal slanted columnar thin film

  13. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which are hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method on π0 or FDR estimation in a dependency context.
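
    For reference, the Benjamini-Hochberg step-up procedure compared in this study can be implemented in a few lines (a generic sketch, not the authors' simulation code; the example p-value mixture is made up):

```python
# Self-contained implementation of the Benjamini-Hochberg step-up procedure.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array marking the hypotheses rejected at FDR level alpha."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m        # BH critical values alpha*i/m
    below = pvals[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))           # largest i with p_(i) <= alpha*i/m
        rejected[order[:k + 1]] = True                   # reject the k+1 smallest p-values
    return rejected

# Example: 900 true nulls (uniform p-values) and 100 non-nulls (small p-values).
rng = np.random.default_rng(7)
p = np.r_[rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)]
rej = benjamini_hochberg(p, alpha=0.05)
print("rejections:", int(rej.sum()), " of which true nulls:", int(rej[:900].sum()))
```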

  14. Microfluidic engineered high cell density three-dimensional neural cultures

    Science.gov (United States)

    Cullen, D. Kacy; Vukasinovic, Jelena; Glezer, Ari; La Placa, Michelle C.

    2007-06-01

    Three-dimensional (3D) neural cultures with cells distributed throughout a thick, bioactive protein scaffold may better represent neurobiological phenomena than planar correlates lacking matrix support. Neural cells in vivo interact within a complex, multicellular environment with tightly coupled 3D cell-cell/cell-matrix interactions; however, thick 3D neural cultures at cell densities approaching that of brain rapidly decay, presumably due to diffusion-limited interstitial mass transport. To address this issue, we have developed a novel perfusion platform that utilizes forced intercellular convection to enhance mass transport. First, we demonstrated that in thick (>500 µm) 3D neural cultures supported by passive diffusion, cell densities ... (≥10^4 cells mm^-3), continuous medium perfusion at 2.0-11.0 µL min^-1 improved viability compared to non-perfused cultures (p ...), which showed cell death and matrix degradation. In perfused cultures, survival was dependent on proximity to the perfusion source at 2.00-6.25 µL min^-1 (p ...), with >90% viability in both neuronal cultures and neuronal-astrocytic co-cultures. This work demonstrates the utility of forced interstitial convection in improving the survival of high cell density 3D engineered neural constructs and may aid in the development of novel tissue-engineered systems reconstituting 3D cell-cell/cell-matrix interactions.

  15. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.

  16. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  17. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  18. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which fails most of the existing solutions. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
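    The LSH idea that the LSB-tree builds on can be shown in a few lines. The sketch below is a plain single-table random-projection hash for Euclidean NN search, not the LSB-tree or LSB-forest; the bucket width `w`, the number of hash functions, and the brute-force fallback are all assumptions made for the illustration.

```python
import numpy as np
from collections import defaultdict

class L2LSH:
    """Minimal locality-sensitive hashing for approximate NN under Euclidean distance."""

    def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_hashes, dim))   # random projection directions
        self.b = rng.uniform(0.0, w, size=n_hashes)     # random offsets
        self.w = w
        self.table = defaultdict(list)

    def _key(self, x):
        return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))

    def index(self, data):
        self.data = np.asarray(data)
        for i, x in enumerate(self.data):
            self.table[self._key(x)].append(i)

    def query(self, q):
        cand = self.table.get(self._key(q), [])
        if not cand:                           # fall back to brute force on an empty bucket
            cand = range(len(self.data))
        cand = np.asarray(list(cand))
        d = np.linalg.norm(self.data[cand] - q, axis=1)
        return cand[np.argmin(d)]

rng = np.random.default_rng(1)
data = rng.standard_normal((10000, 32))
lsh = L2LSH(dim=32)
lsh.index(data)
print(lsh.query(data[17]))   # returns 17, since the query point itself is indexed
```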

  19. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

    Full Text Available High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals).

  20. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignments represent in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high-resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiments that provide sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  1. Space experiments with high stability clocks

    International Nuclear Information System (INIS)

    Vessot, R.F.C.

    1993-01-01

    Modern metrology depends increasingly on the accuracy and frequency stability of atomic clocks. Applications of such high-stability oscillators (or clocks) to experiments performed in space are described and estimates of the precision of these experiments are made in terms of clock performance. Methods using time-correlation to cancel localized disturbances in very long signal paths and a proposed space borne four station VLBI system are described. (TEC). 30 refs., 14 figs., 1 tab

  2. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated

  3. Three-Dimensional Neutral Transport Simulations of Gas Puff Imaging Experiments

    International Nuclear Information System (INIS)

    Stotler, D.P.; DIppolito, D.A.; LeBlanc, B.; Maqueda, R.J.; Myra, J.R.; Sabbagh, S.A.; Zweben, S.J.

    2003-01-01

    Gas Puff Imaging (GPI) experiments are designed to isolate the structure of plasma turbulence in the plane perpendicular to the magnetic field. Three-dimensional aspects of this diagnostic technique as used on the National Spherical Torus eXperiment (NSTX) are examined via Monte Carlo neutral transport simulations. The radial widths of the simulated GPI images are in rough agreement with observations. However, the simulated emission clouds are angled approximately 15 degrees with respect to the experimental images. The simulations indicate that the finite extent of the gas puff along the viewing direction does not significantly degrade the radial resolution of the diagnostic. These simulations also yield effective neutral density data that can be used in an approximate attempt to infer two-dimensional electron density and temperature profiles from the experimental images.

  4. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons have shown contradictory results due to technological shortcomings. DATA SOURCES: … the Central Register of Controlled Trials database. CONCLUSIONS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited for the convincing results to be reproduced.

  5. On the sensitivity of dimensional stability of high density polyethylene on heating rate

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Full Text Available Although high density polyethylene (HDPE) is one of the most widely used industrial polymers, its application relative to its potential has been limited by its low dimensional stability, particularly at high temperature. The dilatometry test is considered a method for examining the thermal dimensional stability (TDS) of the material. In spite of the importance of simulating the TDS of HDPE during the dilatometry test, it has received little attention from other investigators. Thus the main goal of this research is the simulation of the TDS of HDPE, together with validation of the simulation results against practical experiments. For this purpose the standard dilatometry test was performed on HDPE specimens. The secant coefficient of linear thermal expansion was computed from the test. Then, by considering boundary conditions and material properties, the dilatometry test was simulated at different heating rates and the thermal strain versus temperature was calculated. The results showed that the simulation results and the practical experiments were in close agreement.

  6. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially with the number of features. Unfortunately in data mining, as well as other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
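    The wrapper idea, scoring a candidate feature subset by the cross-validated accuracy of a classifier trained on it while a stochastic search proposes subsets, can be sketched as follows. A simple bit-flip local search stands in for the swarm metaheuristic and k-NN is the wrapped classifier; both choices, and all parameter values, are assumptions made for illustration rather than the paper's Swarm Search.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def wrapper_feature_search(X, y, n_iter=200, seed=0):
    """Stochastic wrapper-style feature subset search.

    Fitness = cross-validated accuracy of a classifier trained on the subset.
    Any classifier could be plugged into the fitness function.
    """
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    mask = rng.random(p) < 0.5                       # random initial subset

    def fitness(m):
        if not m.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, m], y, cv=3).mean()

    best, best_fit = mask.copy(), fitness(mask)
    for _ in range(n_iter):
        cand = best.copy()
        flip = rng.integers(p, size=max(1, p // 20))  # flip a few random bits
        cand[flip] = ~cand[flip]
        f = fitness(cand)
        if f >= best_fit:
            best, best_fit = cand, f
    return best, best_fit

X, y = make_classification(n_samples=300, n_features=60, n_informative=8, random_state=0)
mask, acc = wrapper_feature_search(X, y, n_iter=50)
print(mask.sum(), round(acc, 3))
```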

  7. Experiments on melting in classical and quantum two dimensional electron systems

    International Nuclear Information System (INIS)

    Williams, F.I.B.

    1991-01-01

    "Two-dimensional electron system" (2DES) here refers to electrons whose dynamics is free in 2 dimensions but blocked in the third. Experiments have been performed in two limiting situations: the classical, low density, limit realised by electrons deposited on a liquid helium surface and the quantum, high density, limit realised by electrons at an interface between two epitaxially matched semiconductors. In the classical system, where T > T_Q, so that the thermodynamic state is determined by the competition between the temperature and the Coulomb interaction, melting is induced either by raising the temperature at constant density or by lowering the density at finite temperature. In the quantum system, it is not possible to lower the density below about 100 n_W without the Coulomb interaction losing out to the random field representing the extrinsic disorder imposed by the semiconductor host. Instead one has to induce crystallisation with the help of the Lorentz force, by applying a perpendicular magnetic field B [2]. As the quantum magnetic length l_c = (ħc/eB)^(1/2) is reduced with respect to the interelectronic spacing a, expressed by the filling factor ν = 2l_c^2/a^2, the system exhibits the quantum Hall effect (QHE), first for integer and then for fractional values of ν. The fractional quantum Hall effect (FQHE) is a result of Coulomb-induced correlation in the quantum liquid, but as ν is decreased still further the correlations are expected to take on long-range crystal-like periodicity accompanied by elastic shear rigidity. Such a state can nonetheless be destroyed by the disordering effect of temperature, giving rise to a phase boundary in the (T, B) plane. The aim of experiment is first to determine the phase diagram and then to help elucidate the mechanism of the melting. (author)
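    The quantities referred to above can be written out explicitly (standard Gaussian-units definitions; identifying the spacing a through πa²n = 1 is an assumption made here so that the stated relation for ν is consistent):

```latex
\ell_c = \left(\frac{\hbar c}{eB}\right)^{1/2}, \qquad
\nu = 2\pi n\,\ell_c^{2} = \frac{2\ell_c^{2}}{a^{2}} \quad \text{with } \pi a^{2} n = 1 .
```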

  8. Experiments with three-dimensional riblets as an idealized model of shark skin

    Energy Technology Data Exchange (ETDEWEB)

    Bechert, D.W.; Bruse, M.; Hage, W. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Berlin (Germany). Dept. of Turbulence Res.

    2000-05-01

    The skin of fast sharks exhibits a rather intriguing three-dimensional rib pattern. Therefore, the question arises whether or not such three-dimensional riblet surfaces may produce an equivalent or even higher drag reduction than straight two-dimensional riblets. Previously, the latter have been shown to reduce turbulent wall shear stress by up to 10%. Hence, the drag reduction by three-dimensional riblet surfaces is investigated experimentally. Our idealized 3D-surface consists of sharp-edged fin-shaped elements arranged in an interlocking array. The turbulent wall shear stress on this surface is measured using direct force balances. In a first attempt, wind tunnel experiments with about 365000 tiny fin elements per test surface have been carried out. Due to the complexity of the surface manufacturing process, a comprehensive parametric study was not possible. These initial wind tunnel data, however, hinted at an appreciable drag reduction. Subsequently, in order to have a better judgement on the potential of these 3D-surfaces, oil channel experiments are carried out. In our new oil channel, the geometrical dimensions of the fins can be magnified 10 times in size as compared to the initial wind tunnel experiments, i.e., from typically 0.5 mm to 5 mm. For these latter oil channel experiments, novel test plates with variable fin configuration have been manufactured, with 1920-4000 fins. This enhanced variability permits measurements with a comparatively large parameter range. As a result of our measurements, it can be concluded that 3D-riblet surfaces do indeed produce an appreciable drag reduction. We found as much as 7.3% decreased turbulent shear stress, as compared to a smooth reference plate.

  9. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra-dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  10. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly-undetectable stegosystems for real digital media. The main design principle is to minimize a suitably-defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.

  11. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    … than a global property. Different from existing approaches, it is not grid-based and is dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.
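    For contrast with the paper's dimensionality-unbiased method, a plain distance-based ranking (average distance to the k nearest neighbours) illustrates the general idea of ranked, cutoff-free outlier detection; the value of k and the toy data below are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_outlier_scores(X, k=10):
    """Rank points by their average distance to the k nearest neighbours.

    Ranking lets the user pick how many outliers to report instead of
    fixing a hard cutoff.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own neighbour
    dist, _ = nn.kneighbors(X)
    return dist[:, 1:].mean(axis=1)                   # drop the zero self-distance

rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((500, 50)),        # bulk of the data
               rng.standard_normal((5, 50)) * 5.0])   # a few scattered outliers
scores = knn_outlier_scores(X)
print(np.argsort(scores)[-5:])                        # indices of the top-5 ranked outliers
```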

  12. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is proposed. From the Rényi dimensionalities calculated from experimental data (whether hadron distributions over rapidity intervals or particle distributions in an N-dimensional momentum space), one can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs.

  13. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

    A review is given of more recent papers on this subject. These papers have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  14. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM experience at LOTUS

    International Nuclear Information System (INIS)

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1989-01-01

    In recent years, the LOTUS fusion blanket facility at IGA-EPF in Lausanne provided a series of irradiation experiments with the Lithium Blanket Module (LBM). The LBM has both realistic fusion blanket materials and configuration. It is approximately an 80-cm cube, and the breeding material is Li2. Using as the D-T neutron source the Haefely Neutron Generator (HNG) with an intensity of about 5·10^12 n/s, a series of experiments with the bare LBM as well as with the LBM preceded by Pb, Be and ThO2 multipliers were carried out. In a recent common Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference Sn-transport code ONEDANT, the two-dimensional finite element Sn-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. For the nucleonic transport calculations, three 187-neutron-group libraries are presently available: MATXS8A and MATXS8F based on ENDF/B-V evaluations and MAT187 based on JEF/EFF evaluations. COVFILS-2, a 74-group library of neutron cross-sections, scattering matrices and covariances, is the data source for SENSIBL; the 74-group structure of COVFILS-2 is a subset of the Los Alamos 187-group structure. Within the framework of the present work a complete set of forward and adjoint two-dimensional TRISM calculations was performed both for the bare, as well as for the Pb- and Be-preceded, LBM using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed.

  15. One dimensional two-body collisions experiment based on LabVIEW interface with Arduino

    Science.gov (United States)

    Saphet, Parinya; Tong-on, Anusorn; Thepnurat, Meechai

    2017-09-01

    The purpose of this work is to build a physics lab apparatus that is modern, low-cost and simple. In the one-dimensional two-body collision experiment, we used the Arduino UNO R3 as a data acquisition system controlled by a LabVIEW program. The photogate sensors were designed using an LED and an LDR to measure position as a function of time. An aluminium houseware frame and a blower were used for the air track system. In both the totally inelastic and the elastic collision experiments, the results for momentum and energy conservation are in good agreement with the theoretical calculations.
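    The data reduction behind such an experiment is straightforward: each glider's speed is the photogate flag length divided by the blocking time, and momentum and kinetic energy are compared before and after the collision. The numbers below are hypothetical readings, not values from the paper.

```python
def velocity(flag_length_m, blocking_time_s):
    """Speed of a glider from the time its flag blocks a photogate."""
    return flag_length_m / blocking_time_s

def check_collision(m1, m2, v1i, v2i, v1f, v2f):
    """Compare momentum and kinetic energy before and after a 1D collision."""
    p_i = m1 * v1i + m2 * v2i
    p_f = m1 * v1f + m2 * v2f
    ke_i = 0.5 * (m1 * v1i**2 + m2 * v2i**2)
    ke_f = 0.5 * (m1 * v1f**2 + m2 * v2f**2)
    return p_i, p_f, ke_i, ke_f

# Hypothetical readings: 0.10 m flags, glider 1 (0.20 kg) hits a resting glider 2 (0.20 kg).
v1i = velocity(0.10, 0.250)            # 0.40 m/s before the collision
v1f, v2f = 0.0, velocity(0.10, 0.255)  # after an (almost) elastic, equal-mass collision
print(check_collision(0.20, 0.20, v1i, 0.0, v1f, v2f))
```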

  16. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

    Full Text Available Clustering is a process of grouping elements together, designed in such a way that the elements assigned to a cluster are more comparable to each other than to the remaining data points. During clustering, certain difficulties when dealing with high-dimensional data are ubiquitous and abundant. Previous work using anonymization methods for high-dimensional data spaces failed to address the problem of dimensionality reduction when non-binary databases are included. In this work we study methods for dimensionality reduction for non-binary databases; analyzing the behavior of dimensionality reduction for non-binary databases yields performance improvements with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. We first present an analysis of the attributes in the non-binary database, where cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on the multi-cluster dimensions, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e. tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set, showing improved tag feature extraction and a minimum error rate compared to conventional anonymization.

  17. High-power laser experiments to study collisionless shock generation

    Directory of Open Access Journals (Sweden)

    Sakawa Y.

    2013-11-01

    Full Text Available A collisionless Weibel-instability mediated shock in a self-generated magnetic field is studied using two-dimensional particle-in-cell simulation [Kato and Takabe, Astrophys. J. Lett. 681, L93 (2008)]. It is predicted that the generation of the Weibel shock requires the use of a NIF-class high-power laser system. Collisionless electrostatic shocks are produced in counter-streaming plasmas using the Gekko XII laser system [Kuramitsu et al., Phys. Rev. Lett. 106, 175002 (2011)]. A NIF facility time proposal has been approved to study the formation of the collisionless Weibel shock. OMEGA and OMEGA EP experiments have been started to study the plasma conditions of counter-streaming plasmas required for the NIF experiment using Thomson scattering and to develop proton radiography diagnostics.

  18. A high-power target experiment

    CERN Document Server

    Kirk, H G; Ludewig, H; Palmer, Robert; Samulyak, V; Simos, N; Tsang, Thomas; Bradshaw, T W; Drumm, Paul V; Edgecock, T R; Ivanyushenkov, Yury; Bennett, Roger; Efthymiopoulos, Ilias; Fabich, Adrian; Haseroth, H; Haug, F; Lettry, Jacques; Hayato, Y; Yoshimura, Koji; Gabriel, Tony A; Graves, Van; Spampinato, P; Haines, John; McDonald, Kirk T

    2005-01-01

    We describe an experiment designed as a proof-of-principle test for a target system capable of converting a 4 MW proton beam into a high-intensity muon beam suitable for incorporation into either a neutrino factory complex or a muon collider. The target system is based on exposing a free mercury jet to an intense proton beam in the presence of a high strength solenoidal field.

  19. Phonons in a one-dimensional Yukawa chain: Dusty plasma experiment and model

    International Nuclear Information System (INIS)

    Liu Bin; Goree, J.

    2005-01-01

    Phonons in a one-dimensional chain of charged microspheres suspended in a plasma were studied in an experiment. The phonons correspond to random particle motion in the chain; no external manipulation was applied to excite the phonons. Two modes were observed, longitudinal and transverse. The velocity fluctuations in the experiment are analyzed using current autocorrelation functions and a phonon spectrum. The phonon energy was found to be unequally partitioned among phonon modes in the dusty plasma experiment. The experimental phonon spectrum was characterized by a dispersion relation that was found to differ from the dispersion relation for externally excited phonons. This difference is attributed to the presence of frictional damping due to gas, which affects the propagation of externally excited phonons differently from phonons that correspond to random particle motion. A model is developed and fit to the experiment to explain the features of the autocorrelation function, phonon spectrum, and the dispersion relation

  20. Laboratory setup and results of experiments on two-dimensional multiphase flow in porous media

    International Nuclear Information System (INIS)

    McBride, J.F.; Graham, D.N.

    1990-10-01

    In the event of an accidental release into earth's subsurface of an immiscible organic liquid, such as a petroleum hydrocarbon or chlorinated organic solvent, the spatial and temporal distribution of the organic liquid is of great interest when considering efforts to prevent groundwater contamination or restore contaminated groundwater. An accurate prediction of immiscible organic liquid migration requires the incorporation of relevant physical principles in models of multiphase flow in porous media; these physical principles must be determined from physical experiments. This report presents a series of such experiments performed during the 1970s at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland. The experiments were designed to study the transient, two-dimensional displacement of three immiscible fluids in a porous medium. This experimental study appears to be the most detailed published to date. The data obtained from these experiments are suitable for the validation and test calibration of multiphase flow codes. 73 refs., 140 figs

  1. High density implosion experiments at Nova

    International Nuclear Information System (INIS)

    Cable, M.D.; Hatchett, S.P.; Nelson, M.B.; Lerche, R.A.; Murphy, T.J.; Ress, D.B.

    1994-01-01

    Deuterium filled glass microballoons are used as indirectly driven targets for implosion experiments at the Nova Laser Fusion Facility. High levels of laser precision were required to achieve fuel densities and convergences to an ignition scale hot spot. (AIP) copyright 1994 American Institute of Physics

  2. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  3. Dimensional consistency achieved in high-performance synchronizing hubs

    International Nuclear Information System (INIS)

    Garcia, P.; Campos, M.; Torralba, M.

    2013-01-01

    The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production via ("2P2S") has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process. (Author) 21 refs.

  4. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors. For teachers

  5. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrapeoff zone shielding for sputtered impurities

  6. Dimensional consistency achieved in high-performance synchronizing hubs

    Directory of Open Access Journals (Sweden)

    García, P.

    2013-02-01

    Full Text Available The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production via (“2P2S”) has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process.


  7. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum level models for the so-called equation of state (constitutive model for the spherical part of the Cauchy tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.

  8. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data is simultaneously high-dimensional. In the biomedicine field, skewed data often appear. In this study, we try to deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and it can thus be regarded as an effective and efficient tool to deal with high-dimensional and imbalanced biomedical data.
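    The combination described, balancing each bootstrap by undersampling the majority class, training each base SVM on a subset of the features, and voting, can be sketched as below. This is a generic asymmetric-bagging/random-subspace hybrid, not the authors' FSS strategy; the subspace fraction, ensemble size, linear kernel and toy data are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def asymmetric_bagging_predict(X_train, y_train, X_test, n_estimators=25,
                               subspace_frac=0.3, seed=0):
    """Majority-vote ensemble for imbalanced, high-dimensional data.

    Each base SVM sees all minority samples, an equally sized random draw of
    majority samples (asymmetric bagging), and a random subset of the
    features (random subspace).
    """
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y_train == 1)
    majority = np.flatnonzero(y_train == 0)
    p = X_train.shape[1]
    k = max(1, int(subspace_frac * p))
    votes = np.zeros(len(X_test))
    for _ in range(n_estimators):
        maj = rng.choice(majority, size=len(minority), replace=False)
        rows = np.concatenate([minority, maj])
        cols = rng.choice(p, size=k, replace=False)
        clf = SVC(kernel="linear").fit(X_train[np.ix_(rows, cols)], y_train[rows])
        votes += clf.predict(X_test[:, cols])
    return (votes >= n_estimators / 2).astype(int)

# Toy imbalanced problem: roughly 10% positives in 100 dimensions.
X, y = make_classification(n_samples=400, n_features=100, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
pred = asymmetric_bagging_predict(X[:300], y[:300], X[300:])
print(pred.sum(), y[300:].sum())
```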

  9. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Summary Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  10. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  11. Compilation of current high energy physics experiments

    International Nuclear Information System (INIS)

    1978-09-01

    This compilation of current high-energy physics experiments is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and the nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. Nominally, the compilation includes summaries of all high-energy physics experiments at the above laboratories that were approved (and not subsequently withdrawn) before about June 1978, and had not completed taking of data by 1 January 1975. The experimental summaries are supplemented with three indexes to the compilation, several vocabulary lists giving names or abbreviations used, and a short summary of the beams at each of the laboratories (except Rutherford). The summaries themselves are included on microfiche

  12. High energy collisions of nuclei: experiments

    International Nuclear Information System (INIS)

    Heckman, H.H.

    1977-09-01

    Heavy-ion nuclear reactions with projectile energies up to 2.1 GeV/A are reviewed. The concept of "rapidity" is elucidated, and the reactions discussed are divided into sections dealing with target fragmentation, projectile fragmentation, and the intermediate region, with emphasis on the production of light nuclei in high-energy heavy-ion collisions. Target fragmentation experiments using nuclear emulsion and AgCl visual track detectors are also summarized. 18 figures

  13. Particle physics experiments at high energy colliders

    International Nuclear Information System (INIS)

    Hauptman, John

    2011-01-01

    Written by one of the detector developers for the International Linear Collider, this is the first textbook for graduate students dedicated to the complexities and the simplicities of high energy collider detectors. It is intended as a specialized reference for a standard course in particle physics, and as a principal text for a special topics course focused on large collider experiments. Equally useful as a general guide for physicists designing big detectors. (orig.)

  14. Triggers for a high sensitivity charm experiment

    International Nuclear Information System (INIS)

    Christian, D.C.

    1994-07-01

    Any future charm experiment clearly should implement an E_T trigger and a μ trigger. In order to reach the 10^8 reconstructed charm level for hadronic final states, a high quality vertex trigger will almost certainly also be necessary. The best hope for the development of an offline quality vertex trigger lies in further development of the ideas of data-driven processing pioneered by the Nevis/U. Mass. group.

  15. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    Science.gov (United States)

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher
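    The motivating check, whether the per-variable effects are symmetric around zero or a large one-sided block of altered variables skews the whole distribution, can be mimicked with a crude skewness test. This sketch is not the DSE-test or the HMM-assisted normalization of the paper; the log2-of-means M-values, the use of scipy's skewtest, and the simulated data are assumptions.

```python
import numpy as np
from scipy import stats

def skewed_experiment_check(treated, reference, alpha=0.01):
    """Crude check for a skewed set of per-variable effects.

    M-values (log2 ratios of treated vs. reference means) should be roughly
    symmetric around zero when up- and down-regulation are balanced; strong
    asymmetry suggests that standard global normalization will be biased.
    """
    m = np.log2(treated.mean(axis=0) + 1) - np.log2(reference.mean(axis=0) + 1)
    stat, pval = stats.skewtest(m)
    return m, stat, pval < alpha

rng = np.random.default_rng(3)
reference = rng.lognormal(mean=3.0, sigma=0.3, size=(4, 2000))
treated = reference.copy()
treated[:, :800] *= 2.5              # a large, one-sided block of affected variables
m, stat, skewed = skewed_experiment_check(treated, reference)
print(round(stat, 2), skewed)
```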

  16. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.

  17. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Directory of Open Access Journals (Sweden)

    Cemal Cagatay Bilgin

    Full Text Available BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  18. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  19. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

    Full Text Available This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, and this allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the general public license, as part of the R software project.
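    A rough analogue of the idea behind HDDC, summarizing each group by its own low-dimensional subspace and intrinsic dimension, can be sketched outside R as follows. Choosing the dimension by a fixed explained-variance fraction is a simplification of the model-based criterion used by the package, and k-means stands in for the EM algorithm; both choices and the toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_specific_subspaces(X, n_clusters=3, var_kept=0.9):
    """Cluster, then estimate a subspace and intrinsic dimension per cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    subspaces = {}
    for g in range(n_clusters):
        Xg = X[labels == g]
        pca = PCA().fit(Xg)
        # Smallest dimension whose components explain var_kept of the variance.
        d = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_kept)) + 1
        subspaces[g] = (d, pca.components_[:d])
    return labels, subspaces

rng = np.random.default_rng(5)
# Three groups living in different low-dimensional subspaces of a 50-D space.
blocks = [rng.standard_normal((100, 3)) @ rng.standard_normal((3, 50)) + mu
          for mu in (0.0, 5.0, -5.0)]
X = np.vstack(blocks)
labels, subspaces = cluster_specific_subspaces(X)
print({g: d for g, (d, _) in subspaces.items()})   # estimated intrinsic dimension per group
```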

  20. High Foot Implosion Experiments in Rugby Hohlraums

    Science.gov (United States)

    Ralph, Joseph; Leidinger, J.-P.; Callahan, D.; Kaiser, P.; Morice, O.; Marion, D.; Moody, J. D.; Ross, J. S.; Amendt, P.; Kritcher, A. L.; Milovich, J. L.; Strozzi, D.; Hinkel, D.; Michel, P.; Berzak Hopkins, L.; Pak, A.; Dewald, E. L.; Divol, L.; Khan, S.; Rygg, R.; Hurricane, O.; Lawrence Livermore National Lab Team; CEA/DAM Team

    2015-11-01

    The rugby hohlraum design is aimed at providing uniform x-ray drive on the capsule while minimizing the need for crossed beam energy transfer (CBET). As part of a series of experiments at the NIF using rugby hohlraums, design improvements in dual axis shock tuning experiments produced some of the most symmetric shocks measured on implosion experiments at the NIF. Additionally, tuning of the in-flight shell and hot spot shape has demonstrated that capsules can be tuned between oblate and prolate with measured velocities of nearly 340 km/s. However, these experimental measurements were accompanied by high levels of Stimulated Raman Scattering (SRS) that may result from the long inner beam path length, reamplification of the inner SRS by the outers, significant CBET, or a combination of these. All rugby shot results were achieved with lower levels of hot electrons that can preheat the DT fuel layer for increased adiabat and reduced areal density. Detailed results from these experiments and those planned throughout the summer will be presented and compared with results obtained from cylindrical hohlraums. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Lab under Contract DE-AC52-07NA27344.

  1. Lithium decoration of three dimensional boron-doped graphene frameworks for high-capacity hydrogen storage

    International Nuclear Information System (INIS)

    Wang, Yunhui; Meng, Zhaoshun; Liu, Yuzhen; You, Dongsen; Wu, Kai; Lv, Jinchao; Wang, Xuezheng; Deng, Kaiming; Lu, Ruifeng; Rao, Dewei

    2015-01-01

    Based on density functional theory and first-principles molecular dynamics simulations, a three-dimensional B-doped graphene-interconnected framework has been constructed that shows good thermal stability even after metal loading. The average binding energy of adsorbed Li atoms on the proposed material (2.64 eV) is considerably larger than the cohesive energy per atom of bulk Li metal (1.60 eV). This value is ideal for atomically dispersed Li doping in experiments. From grand canonical Monte Carlo simulations, high hydrogen storage capacities of 5.9 wt% and 52.6 g/L in the Li-decorated material are attained at 298 K and 100 bar

  2. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results through the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
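
    The following Python sketch illustrates the comparison the record describes, under assumptions of its own: synthetic data generated on a 3-dimensional manifold embedded in 50 dimensions, PCA fitted on a small subsample as a stand-in for the (unspecified) projection method, and scikit-learn's MLPRegressor as the neural network. It is not the authors' experimental setup, only a way to reproduce the qualitative effect of training on the projected rather than the raw inputs.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic data living on a 3-D manifold embedded in 50 dimensions.
    n, d_low, d_high = 5000, 3, 50
    Z = rng.uniform(-1.0, 1.0, size=(n, d_low))        # intrinsic coordinates
    A = rng.normal(size=(d_low, d_high))
    X = np.tanh(Z @ A)                                  # embedding into 50-D
    y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]             # function defined on the manifold

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Projection fitted on a sparse subsample only, as in the record's setting.
    subsample = rng.choice(len(X_tr), size=500, replace=False)
    proj = PCA(n_components=d_low).fit(X_tr[subsample])

    def fit_and_score(train_in, test_in):
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        net.fit(train_in, y_tr)
        return np.mean((net.predict(test_in) - y_te) ** 2)

    print("MSE, network on raw 50-D inputs:", fit_and_score(X_tr, X_te))
    print("MSE, network on 3-D projection :", fit_and_score(proj.transform(X_tr), proj.transform(X_te)))
    ```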

  3. Development and assessment of multi-dimensional flow model in MARS compared with the RPI air-water experiment

    International Nuclear Information System (INIS)

    Lee, Seok Min; Lee, Un Chul; Bae, Sung Won; Chung, Bub Dong

    2004-01-01

    Multi-dimensional flow models in system codes have been developed over many years. RELAP5-3D, CATHARE and TRACE each have their own multi-dimensional flow models and have successfully applied them to system safety analysis. At KAERI, the MARS (Multi-dimensional Analysis of Reactor Safety) code was likewise developed by integrating the RELAP5/MOD3 and COBRA-TF codes. Even though the COBRA-TF module can analyze three-dimensional flows, it is limited when applied to 3D shear-stress-dominated phenomena or cylindrical geometries. Therefore, multi-dimensional analysis models were newly developed by implementing three-dimensional momentum flux and diffusion terms. The multi-dimensional model has been assessed against multi-dimensional conceptual problems and CFD code results. Although the assessment results were reasonable, the multi-dimensional model had not been validated against two-phase flow experimental data. In this paper, the multi-dimensional air-water two-phase flow experiment was simulated and analyzed

  4. Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks

    Science.gov (United States)

    Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.

    2013-12-01

    Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field on fractures intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line. Thus
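
    A toy illustration of the "interpolate the reconstructed velocity anywhere, then step the particles" part of this workflow, assuming a structured 2-D grid and a synthetic divergence-free velocity field instead of the unstructured DFN mesh and PFLOTRAN-derived Darcy velocities used in the actual capability:

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Synthetic divergence-free velocity field sampled on a structured grid; the
    # real workflow reconstructs Darcy velocities on an unstructured DFN mesh.
    x = np.linspace(0.0, 1.0, 101)
    y = np.linspace(0.0, 1.0, 101)
    X, Y = np.meshgrid(x, y, indexing="ij")
    U = -np.sin(np.pi * X) * np.cos(np.pi * Y)
    V = np.cos(np.pi * X) * np.sin(np.pi * Y)
    u_interp = RegularGridInterpolator((x, y), U)
    v_interp = RegularGridInterpolator((x, y), V)

    def velocity(p):
        # Continuous interpolation of the velocity at arbitrary particle positions.
        return np.column_stack([u_interp(p), v_interp(p)])

    def track(p, dt=1e-3, n_steps=2000):
        # Purely advective particle tracking with a midpoint (RK2) step.
        traj = [p.copy()]
        for _ in range(n_steps):
            p_mid = np.clip(p + 0.5 * dt * velocity(p), 0.0, 1.0)
            p = np.clip(p + dt * velocity(p_mid), 0.0, 1.0)
            traj.append(p.copy())
        return np.array(traj)

    particles = np.random.default_rng(1).uniform(0.2, 0.8, size=(10, 2))
    paths = track(particles)
    print(paths.shape)  # (n_steps + 1, number of particles, 2)
    ```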

  5. Diamond sensors for future high energy experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bachmair, Felix, E-mail: bachmair@phys.ethz.ch

    2016-09-21

    With the planned upgrade of the LHC to the High-Luminosity LHC [1], the general purpose experiments ATLAS and CMS are planning to upgrade their innermost tracking layers with more radiation tolerant technologies. Chemical Vapor Deposition (CVD) diamond is one such technology. CVD diamond sensors are an established technology as beam condition monitors in the highest radiation areas of all LHC experiments. The RD42 collaboration at CERN is leading the effort to use CVD diamond as a material for tracking detectors operating in extreme radiation environments. An overview of the latest developments from RD42 is presented, including the present status of diamond sensor production, a study of pulse height dependencies on incident particle flux and the development of 3D diamond sensors.

  6. High Energy Antimatter Telescope (HEAT) Balloon Experiment

    Science.gov (United States)

    Beatty, J. J.

    1995-01-01

    This grant supported our work on the High Energy Antimatter Telescope (HEAT) balloon experiment. The HEAT payload is designed to perform a series of experiments focusing on cosmic-ray positrons, electrons, and antiprotons. Thus far two flights of the HEAT-e± configuration have taken place. During the period of this grant major accomplishments included the following: (1) Publication of the first results of the 1994 HEAT-e± flight in Physical Review Letters; (2) Successful reflight of the HEAT-e± payload from Lynn Lake in August 1995; (3) Repair and refurbishment of the elements of the HEAT payload damaged during the landing following the 1995 flight; and (4) Upgrade of the ground support equipment for future flights of the HEAT payload.

  7. Reactor G1: high power experiments; Experiences a forte puissance

    Energy Technology Data Exchange (ETDEWEB)

    Laage, F de; Teste du Baillet, A; Veyssiere, A; Wanner, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Retel, H [Societe Rateau, D.E.A. (France)

    1957-07-01

    The experiments carried out in the starting-up programme of the reactor G1 comprised a series of tests at high power, which allowed the following points to be studied: 1- Effect of poisoning by Xenon (absolute value, evolution). 2- Temperature coefficients of the uranium and graphite for a temperature distribution corresponding to heating by fission. 3- Effect of the pressure (due to the cooling system) on the reactivity. 4- Calibration of the security rods as a function of their position in the pile (1). 5- Temperature distribution of the graphite, the sheathing, the uranium and the air leaving the canals, in a pile running normally at high power. 6- Neutron flux distribution in a pile running normally at high power. 7- Determination of the power by nuclear and thermodynamic methods. These experiments have been carried out under two very different pile conditions. From the 1st to the 15th of August 1956, a series of power increases, followed by periods of stabilisation, was induced in a pile containing uranium only, in 457 canals, amounting to about 34 tons of fuel. A knowledge of the efficiency of the control rods in such a pile has made it possible to measure with good accuracy the principal effects at high temperatures, that is, to deal with points 1, 2, 3, 5. Flux charts giving information on the variations of the material Laplacian and extrapolation lengths in the reflector have been drawn up. Finally the thermodynamic power has been measured under good conditions, in spite of some installation difficulties. On September 16, the pile had its final charge of 100 tons. All the canals were loaded, 1,234 with uranium and 53 (i.e. exactly 4 per cent of the total number) with thorium uniformly distributed in a square lattice of 100 cm side. Since technical difficulties prevented the calibration of the control rods, the measurements were limited to the determination of the thermodynamic power and the temperature distributions (points 5 and 7). This report will

  8. High-resolution non-destructive three-dimensional imaging of integrated circuits

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  9. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  10. Hall MHD Modeling of Two-dimensional Reconnection: Application to MRX Experiment

    International Nuclear Information System (INIS)

    Lukin, V.S.; Jardin, S.C.

    2003-01-01

    A two-dimensional resistive Hall magnetohydrodynamics (MHD) code is used to investigate the dynamical evolution of driven reconnection in the Magnetic Reconnection Experiment (MRX). The initial conditions and dimensionless parameters of the simulation are set to be similar to the experimental values. We successfully reproduce many features of the time evolution of magnetic configurations for both co- and counter-helicity reconnection in MRX. The Hall effect is shown to be important during the early dynamic X-phase of MRX reconnection, while effectively negligible during the late "steady-state" Y-phase, when plasma heating takes place. Based on simple symmetry considerations, an experiment to directly measure the Hall effect in the MRX configuration is proposed and numerical evidence for the expected outcome is given

  11. Computer experiments on dynamical cloud and space time fluctuations in one-dimensional meta-equilibrium plasmas

    International Nuclear Information System (INIS)

    Rouet, J.L.; Feix, M.R.

    1996-01-01

    The test particle picture is a central theory of weakly correlated plasmas. While experiments and computer experiments have confirmed the validity of this theory at thermal equilibrium, its extension to meta-equilibrium distributions presents interesting and intriguing points connected to the under- or over-population of the high-velocity tail of these distributions, which have not yet been tested. Moreover, the general dynamical Debye cloud (a generalization of the static Debye cloud, which supposes a plasma at thermal equilibrium and a test particle of zero velocity) is presented for any test particle velocity and for three typical velocity distributions (equilibrium plus two meta-equilibria). The simulations deal with a one-dimensional two-component plasma, and the relevance of the check for real three-dimensional plasmas is outlined. Two kinds of results are presented: the dynamical cloud itself and the more usual density (or energy) fluctuation spectra. Special attention is paid to the behavior of long wavelengths, which needs long systems with very small graininess effects and, consequently, sizable computational effort. Finally, the divergence or absence of energy at small wave numbers, connected to the excess or lack of fast particles in the two above-mentioned meta-equilibria, is exhibited. copyright 1996 American Institute of Physics

  12. High current beam transport experiments at GSI

    International Nuclear Information System (INIS)

    Klabunde, J.; Schonlein, A.; Spadtke, P.

    1985-01-01

    The status of the high current ion beam transport experiment is reported. 190 keV Ar¹⁺ ions were injected into six periods of a magnetic quadrupole channel. Since the pulse length is > 0.5 ms, partial space charge neutralization occurs. In our experiments, the behavior of unneutralized and partially space-charge-compensated beams is compared. With an unneutralized beam, emittance growth has been measured at high intensities even for low values of the zero-current phase advance σ0. This initial emittance growth at high tune depression we attribute to the homogenization effect of the space charge density. An analytical formula based on this assumption describes the emittance growth very well. Furthermore, the predicted envelope instabilities for σ0 > 90° were observed even after 6 periods. In agreement with the theory, unstable beam transport was also found experimentally if a beam with different emittances in the two transverse phase planes was injected into the transport channel. Although the space charge force is reduced for a partially neutralized beam, a deterioration of the beam quality was measured in a certain range of beam parameters. Only in the range where an unneutralized beam shows the initial emittance growth does the partial neutralization reduce this effect; otherwise the partially neutralized beam is more unstable

  13. Sensitivity experiments with a one-dimensional coupled plume - iceflow model

    Science.gov (United States)

    Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey

    2016-04-01

    Over the last few decades the Greenland Ice Sheet mass balance has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First we investigated the sensitivity of the submarine melt rate to changes in ocean properties (ocean temperature and salinity), to the amount of subglacial discharge, and to the glacier's tongue geometry itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.

  14. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    Energy Technology Data Exchange (ETDEWEB)

    Aly, A. [North Carolina State Univ., Raleigh, NC (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States); Ivanov, Kostadin [Pennsylvania State Univ., University Park, PA (United States); Motta, Arthur [Pennsylvania State Univ., University Park, PA (United States); Lacroix, E. [Pennsylvania State Univ., University Park, PA (United States); Manera, Annalisa [Univ. of Michigan, Ann Arbor, MI (United States); Walter, D. [Univ. of Michigan, Ann Arbor, MI (United States); Williamson, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gamble, K. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-10-29

    To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  15. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  16. Recent experiments involving highly excited atoms

    International Nuclear Information System (INIS)

    Latimer, C.J.

    1979-01-01

    Very large and fragile atoms may be produced by exciting normal atoms with light or by collisions with other atomic particles. Atoms as large as 10⁻⁶ m are now routinely produced in the laboratory and their properties studied. In this review some of the simpler experimental methods available for the production and detection of such atoms are described, including tunable dye-laser excitation and field ionization. A few recent experiments which illustrate the collision properties and the effects of electric and magnetic fields are also described. The relevance of highly excited atoms to other areas of research, including radioastronomy and isotope separation, is discussed. (author)

  17. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement.

    Science.gov (United States)

    Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin

    2016-04-04

    It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal to noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scan methods using many exposures and fringe pattern projections, which consume a lot of time, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For the bright regions due to high surface reflectivity and high illumination by the ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for those dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation.
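
    A simplified sketch of the per-pixel adaptation step described above, assuming the captured preliminary image is already registered pixel-to-pixel with the projector and using made-up saturation and target levels; the published method additionally handles the camera-projector mapping and works from two preliminary projection steps.

    ```python
    import numpy as np

    def adapt_projection(captured, projected, saturation_level=250.0, target_level=220.0):
        # captured : image grabbed under the preliminary projection (0-255)
        # projected: intensity of that preliminary projection (0-255)
        captured = captured.astype(float)
        projected = projected.astype(float)

        # Estimated surface response: captured brightness per unit projected intensity.
        response = captured / np.maximum(projected, 1.0)

        # Default: full intensity, to keep the SNR high in dark, low-reflectivity regions.
        adapted = np.full_like(projected, 255.0)

        # Where the preliminary capture saturates, lower the projected intensity so the
        # expected capture sits just below saturation (the real method refines this,
        # since a clipped capture underestimates the true response).
        saturated = captured >= saturation_level
        adapted[saturated] = np.clip(target_level / np.maximum(response[saturated], 1e-6), 0.0, 255.0)
        return adapted.astype(np.uint8)

    rng = np.random.default_rng(0)
    prelim = np.full((480, 640), 128.0)                              # uniform preliminary projection
    capture = np.clip(prelim * rng.uniform(0.2, 2.4, (480, 640)), 0, 255)
    adapted = adapt_projection(capture, prelim)
    print(adapted.min(), adapted.max())
    ```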

  18. Optimized set of two-dimensional experiments for fast sequential assignment, secondary structure determination, and backbone fold validation of 13C/15N-labelled proteins

    International Nuclear Information System (INIS)

    Bersch, Beate; Rossy, Emmanuel; Coves, Jacques; Brutscher, Bernhard

    2003-01-01

    NMR experiments are presented which allow backbone resonance assignment, secondary structure identification, and in favorable cases also molecular fold topology determination from a series of two-dimensional ¹H-¹⁵N HSQC-like spectra. The ¹H-¹⁵N correlation peaks are frequency shifted by an amount ±ωX along the ¹⁵N dimension, where ωX is the Cα, Cβ, or Hα frequency of the same or the preceding residue. Because of the low dimensionality (2D) of the experiments, high-resolution spectra are obtained in a short overall experimental time. The whole series of seven experiments can be performed in typically less than one day. This approach significantly reduces experimental time when compared to the standard 3D-based methods. The methodology presented here is thus especially appealing in the context of high-throughput NMR studies of protein structure, dynamics or molecular interfaces

  19. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  20. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
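
    The core idea, finding non-negative barycentric weights that sum to one while allowing an explicit approximation error, can be written as a small linear program. The sketch below illustrates that formulation with scipy.optimize.linprog (minimizing the L1 error); it is not the authors' full prediction scheme, and the library of phase-space points and their successors is synthetic.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def barycentric_weights(library, query):
        # Express `query` (d,) as a convex combination of the rows of `library` (n, d),
        # allowing an explicit L1 approximation error, via a small linear program.
        n, d = library.shape
        c = np.concatenate([np.zeros(n), np.ones(2 * d)])     # minimise total |error|
        A_eq = np.zeros((d + 1, n + 2 * d))
        A_eq[:d, :n] = library.T                              # sum_i w_i x_i
        A_eq[:d, n:n + d] = np.eye(d)                         # + e_plus
        A_eq[:d, n + d:] = -np.eye(d)                         # - e_minus
        A_eq[d, :n] = 1.0                                     # weights sum to one
        b_eq = np.concatenate([query, [1.0]])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        w = res.x[:n]
        return w, res.x[n:n + d] - res.x[n + d:]              # weights, signed error

    # Toy usage: one step of free-run prediction, combining the successors of the
    # library points with the weights found for the current state.
    rng = np.random.default_rng(0)
    states = rng.normal(size=(200, 5))            # reconstructed phase-space points (toy)
    successors = np.roll(states, -1, axis=0)      # the point observed after each state
    w, err = barycentric_weights(states[:-1], states[-1])
    prediction = w @ successors[:-1]
    print(prediction, np.abs(err).sum())
    ```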

  1. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data

  2. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  3. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  4. Sounding experiments of high pressure gas discharge

    International Nuclear Information System (INIS)

    Biele, Joachim K.

    1998-01-01

    A high pressure discharge experiment (200 MPa, 5·10²¹ molecules/cm³, 3000 K) has been set up to study electrically induced shock waves. The apparatus consists of the combustion chamber (4.2 cm³) to produce high pressure gas by burning solid propellant grains to fill the electrical pump chamber (2.5 cm³) containing an insulated coaxial electrode. Electrical pump energy up to 7.8 kJ at 10 kV, which is roughly three times the gas energy in the pump chamber, was delivered by a capacitor bank. From the current-voltage relationship, the discharge develops at rapidly decreasing voltage. Pressure at the combustion chamber, indicating significant underpressure as well as overpressure peaks, is followed by an increase of the static pressure level. These data are not yet completely understood. However, Lorentz forces are believed to generate pinching with subsequent pinch heating, resulting in fast pressure variations propagated as rarefaction and shock waves, respectively. Utilizing pure axisymmetric electrode initiation rather than the often-used exploding wire technology in the pump chamber, repeatable experiments were achieved

  5. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  6. Miniature robust five-dimensional fingertip force/torque sensor with high performance

    International Nuclear Information System (INIS)

    Liang, Qiaokang; Huang, Xiuxiang; Li, Zhongyang; Zhang, Dan; Ge, Yunjian

    2011-01-01

    This paper proposes an innovative design and investigation for a five-dimensional fingertip force/torque sensor with a dual annular diaphragm. This sensor can be applied to a robot hand to measure forces along the X-, Y- and Z-axes (Fx, Fy and Fz) and moments about the X- and Y-axes (Mx and My) simultaneously. In particular, the details of the sensing principle, the structural design and the overload protection mechanism are presented. Afterward, based on the design-of-experiments approach provided by the software ANSYS®, a finite element analysis and a design optimization are performed, with the objective of achieving both high sensitivity and high stiffness of the sensor. Furthermore, static and dynamic calibrations based on the neural network method are carried out. Finally, an application of the developed sensor on a dexterous robot hand is demonstrated. The results of the calibration experiments and the application show that the developed sensor possesses high performance and robustness

  7. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples. Otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm, called Multi-SOM. Multi-SOM builds a number of small self-organizing maps, instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure that we get a truly random initial weight vector set, the map size is less of a consideration, and errors tend to average out. In our experiments on microarray datasets, which are dense data composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and the recall is 11.07% greater than that of SOMs. Thus, the Multi-SOMs algorithm is practical.
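
    A compact sketch of the Multi-SOM idea, assuming 1-D maps, a handful of neurons per map and simple majority voting in place of the Bayesian decision rule described above; map sizes, learning rates and the toy dataset are arbitrary illustrative choices.

    ```python
    import numpy as np

    def train_som(X, n_units, n_iter=2000, lr0=0.5, rng=None):
        # Train a small 1-D SOM (a line of n_units neurons) on the data X.
        rng = rng or np.random.default_rng()
        sigma0 = n_units / 2.0
        W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
        grid = np.arange(n_units)
        for t in range(n_iter):
            x = X[rng.integers(len(X))]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))            # best matching unit
            frac = t / n_iter
            lr, sigma = lr0 * (1 - frac), max(sigma0 * (1 - frac), 0.5)
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))    # neighbourhood function
            W += lr * h[:, None] * (x - W)
        return W

    def neuron_labels(W, X, y, n_classes):
        # Label each neuron by the majority class of the training samples mapped to it.
        votes = np.zeros((len(W), n_classes))
        bmus = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
        for b, label in zip(bmus, y):
            votes[b, label] += 1
        return votes.argmax(axis=1)

    def multi_som_predict(maps, labels, x):
        # Majority vote of the best-matching neurons across all the small maps.
        preds = [lab[np.argmin(((W - x) ** 2).sum(axis=1))] for W, lab in zip(maps, labels)]
        return np.bincount(preds).argmax()

    # Small-n, high-d toy data: 60 samples in 50 dimensions, two classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(2, 1, (30, 50))])
    y = np.array([0] * 30 + [1] * 30)
    maps = [train_som(X, n_units=6, rng=np.random.default_rng(s)) for s in range(5)]
    labels = [neuron_labels(W, X, y, n_classes=2) for W in maps]
    print(multi_som_predict(maps, labels, X[0]), multi_som_predict(maps, labels, X[-1]))
    ```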

  8. High dimensional biological data retrieval optimization with NoSQL technology

    Science.gov (United States)

    2014-01-01

    Background: High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results: In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. Conclusions: The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data
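
    The sketch below illustrates the kind of composite row-key design that makes such key-value retrieval fast, using a plain Python dictionary as a stand-in for the HBase table; the key layout is an assumption for illustration, not the exact schema used in tranSMART.

    ```python
    from collections import OrderedDict

    # Stand-in for an HBase table: an ordered mapping from row key to value.  The real
    # implementation stores these rows in HBase; the key layout below is an illustrative
    # guess, not the exact tranSMART schema.
    kv_store = OrderedDict()

    def put_expression(trial, patient_id, probe_id, value):
        # Composite row key: all rows for one trial/patient sort together, so fetching a
        # patient's whole expression profile is a contiguous range scan, not a join.
        kv_store[f"{trial}|{patient_id:08d}|{probe_id}"] = value

    def scan_patient(trial, patient_id):
        prefix = f"{trial}|{patient_id:08d}|"
        return {k.split("|")[2]: v for k, v in kv_store.items() if k.startswith(prefix)}

    put_expression("GSE_MM", 17, "PROBE_0001", 5.42)
    put_expression("GSE_MM", 17, "PROBE_0002", 7.10)
    put_expression("GSE_MM", 23, "PROBE_0001", 6.03)
    print(scan_patient("GSE_MM", 17))   # {'PROBE_0001': 5.42, 'PROBE_0002': 7.1}
    ```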

  9. High dimensional biological data retrieval optimization with NoSQL technology.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating

  10. Numerical experiment on different validation cases of water coolant flow in supercritical pressure test sections assisted by discriminated dimensional analysis part I: the dimensional analysis

    International Nuclear Information System (INIS)

    Kiss, A.; Aszodi, A.

    2011-01-01

    As recent studies show, in contrast to 'classical' dimensional analysis, whose application is widely described in heat transfer textbooks despite its poor results, the less well known and less used discriminated dimensional analysis approach can provide a deeper insight into the physical problems involved and much better results in all cases where it is applied. As a first step of this ongoing research, a discriminated dimensional analysis has been performed on supercritical pressure water pipe flow heated through the pipe solid wall, in order to identify the independent dimensionless groups (which play an independent role in the above mentioned thermal hydraulic phenomena) and thereby to serve as a theoretical basis for comparison between well known supercritical pressure water pipe heat transfer experiments and the results of their validated CFD simulations. (author)

  11. Reactor G1: high power experiments

    International Nuclear Information System (INIS)

    Laage, F. de; Teste du Baillet, A.; Veyssiere, A.; Wanner, G.

    1957-01-01

    The experiments carried out in the starting-up programme of the reactor G1 comprised a series of tests at high power, which allowed the following points to be studied: 1- Effect of poisoning by Xenon (absolute value, evolution). 2- Temperature coefficients of the uranium and graphite for a temperature distribution corresponding to heating by fission. 3- Effect of the pressure (due to the cooling system) on the reactivity. 4- Calibration of the security rods as a function of their position in the pile (1). 5- Temperature distribution of the graphite, the sheathing, the uranium and the air leaving the canals, in a pile running normally at high power. 6- Neutron flux distribution in a pile running normally at high power. 7- Determination of the power by nuclear and thermodynamic methods. These experiments have been carried out under two very different pile conditions. From the 1st to the 15th of August 1956, a series of power increases, followed by periods of stabilisation, was induced in a pile containing uranium only, in 457 canals, amounting to about 34 tons of fuel. A knowledge of the efficiency of the control rods in such a pile has made it possible to measure with good accuracy the principal effects at high temperatures, that is, to deal with points 1, 2, 3, 5. Flux charts giving information on the variations of the material Laplacian and extrapolation lengths in the reflector have been drawn up. Finally the thermodynamic power has been measured under good conditions, in spite of some installation difficulties. On September 16, the pile had its final charge of 100 tons. All the canals were loaded, 1,234 with uranium and 53 (i.e. exactly 4 per cent of the total number) with thorium uniformly distributed in a square lattice of 100 cm side. Since technical difficulties prevented the calibration of the control rods, the measurements were limited to the determination of the thermodynamic power and the temperature distributions (points 5 and 7). This report will

  12. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate-that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  13. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance for each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graph (K-NNG) based anomaly detector. Benefiting from its ability for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset in order to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
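
    A minimal sketch of the two-stage idea, assuming scikit-learn's MLPRegressor trained to reconstruct its input as the autoencoder (with the bottleneck read out by a manual forward pass) and an ensemble of NearestNeighbors models on random subsets as the KNN detectors; network sizes, subset sizes and the synthetic data are illustrative choices, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy data: 500 nominal points near a low-dimensional structure in 30-D, plus outliers.
    latent, mix = rng.normal(size=(500, 3)), rng.normal(size=(3, 30))
    X_train = np.tanh(latent @ mix) + 0.05 * rng.normal(size=(500, 30))
    X_test = np.vstack([np.tanh(rng.normal(size=(50, 3)) @ mix),   # nominal
                        rng.uniform(-3, 3, size=(10, 30))])        # anomalies

    # 1) "Autoencoder": an MLP trained to reconstruct its input, with an 8-unit bottleneck.
    ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), activation="relu",
                      max_iter=3000, random_state=0)
    ae.fit(X_train, X_train)

    def encode(X):
        # Manual forward pass through the first two layers to read out the bottleneck code.
        h = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
        return np.maximum(0, h @ ae.coefs_[1] + ae.intercepts_[1])

    Z_train, Z_test = encode(X_train), encode(X_test)

    # 2) Ensemble of KNN detectors, each built on a random subset of the encoded data.
    k, detectors = 5, []
    for seed in range(10):
        idx = np.random.default_rng(seed).choice(len(Z_train), size=200, replace=False)
        detectors.append(NearestNeighbors(n_neighbors=k).fit(Z_train[idx]))

    # Anomaly score: mean distance to the k-th nearest neighbour across the ensemble.
    scores = np.mean([nn.kneighbors(Z_test)[0][:, -1] for nn in detectors], axis=0)
    print("mean nominal score:", scores[:50].mean(), " mean anomaly score:", scores[50:].mean())
    ```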

  14. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

    Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during tasks was filled by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Introduction to the conformational investigation of peptides and proteins by using two-dimensional proton NMR experiments

    International Nuclear Information System (INIS)

    Neumann, J.M.; Macquaire, F.

    1991-01-01

    This report presents the basic elements of an introduction to the conformational study of peptides and proteins using two-dimensional proton NMR experiments. First, some general features of protein structures are summarized. A second chapter is devoted to the basic NMR experiments and to the spectral parameters which provide structural information. This description is illustrated by NMR spectra of peptides. The third chapter concerns the most standard two-dimensional proton NMR experiments and their use in the conformational study of peptides and proteins. Lastly, an example of the NMR structural investigation of a peptide is reported [fr]

  16. Three-dimensional turbulent swirling flow in a cylinder: Experiments and computations

    International Nuclear Information System (INIS)

    Gupta, Amit; Kumar, Ranganathan

    2007-01-01

    Dynamics of the three-dimensional flow in a cyclone with tangential inlet and tangential exit were studied using particle tracking velocimetry (PTV) and a three-dimensional computational model. The PTV technique is described in this paper and appears to be well suited for the current flow situation. The flow was helical in nature and a secondary recirculating flow was observed and well predicted by computations using the RNG k-ε turbulence model. The secondary flow was characterized by a single vortex which circulated around the axis and occupied a large fraction of the cylinder diameter. The locus of the vortex center meandered around the cylinder axis, making one complete revolution for a cylinder aspect ratio of 2. Tangential velocities from both experiments and computations were compared and found to be in good agreement. The general structure of the flow does not vary significantly as the Reynolds number is increased. However, slight changes in all components of velocity and pressure were seen as the inlet velocity is increased. By increasing the inlet aspect ratio it was observed that the vortex meandering changed significantly

  17. Three-dimensional turbulent swirling flow in a cylinder: Experiments and computations

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Amit [Department of Mechanical, Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816 (United States); Kumar, Ranganathan [Department of Mechanical, Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816 (United States)]. E-mail: rnkumar@mail.ucf.edu

    2007-04-15

    Dynamics of the three-dimensional flow in a cyclone with tangential inlet and tangential exit were studied using particle tracking velocimetry (PTV) and a three-dimensional computational model. The PTV technique is described in this paper and appears to be well suited for the current flow situation. The flow was helical in nature and a secondary recirculating flow was observed and well predicted by computations using the RNG k-ε turbulence model. The secondary flow was characterized by a single vortex which circulated around the axis and occupied a large fraction of the cylinder diameter. The locus of the vortex center meandered around the cylinder axis, making one complete revolution for a cylinder aspect ratio of 2. Tangential velocities from both experiments and computations were compared and found to be in good agreement. The general structure of the flow does not vary significantly as the Reynolds number is increased. However, slight changes in all components of velocity and pressure were seen as the inlet velocity is increased. By increasing the inlet aspect ratio it was observed that the vortex meandering changed significantly.

  18. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Jun, Hu [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044 (China); Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); Huijun, Sun, E-mail: hjsun1@bjtu.edu.cn [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044 (China); Juan, Wei [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); Xiaodan, Chen [College of Information Science and Technology, Chengdu University, Chengdu 610106 (China); Lei, You [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); College of Information Science and Technology, Chengdu University, Chengdu 610106 (China); Musong, Gu [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China)

    2014-10-24

    A novel three-dimensional cellular automata evacuation model incorporating a stairs factor was proposed to account for the paired effect and for varying velocities in pedestrian evacuation. In the model, the probability of a pedestrian moving to a target position at the next moment is defined from a distance profit and a repulsive-force profit, and the evacuation strategy is elaborated in detail by analyzing the varying velocities and the repulsive phenomena during movement. Finally, experiments with the simulation platform were conducted to study the relationships between evacuation time, average velocity and pedestrian velocity. The results showed that when the ratio of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; in turn, when the ratio of paired pedestrians was higher, a strategy that avoided conflicts improved evacuation efficiency, and priority should be given to scattered evacuation. - Highlights: • A novel three-dimensional evacuation model was presented with a stair factor. • The paired effect and varying velocities were considered in the evacuation model. • The cellular automata model is improved by the repulsive force.
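
    The record does not give the exact expressions for the distance profit and the repulsive-force profit, so the sketch below only illustrates the general mechanism with assumed forms: a pedestrian's moving probability over its von Neumann neighbourhood grows exponentially with the reduction in distance to the exit and drops to zero for occupied cells.

    ```python
    import numpy as np

    def move_probabilities(pos, exit_pos, occupied, beta=2.0):
        # Moving probabilities of a pedestrian at cell `pos` over its von Neumann
        # neighbourhood (stay, right, left, up, down).  Cells closer to the exit get
        # exponentially higher weight (distance profit); occupied cells are excluded
        # (repulsive effect).  The exponential form and beta are assumptions, not
        # taken from the record.
        pos, exit_pos = np.asarray(pos), np.asarray(exit_pos)
        moves = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
        targets = pos + moves
        gain = np.linalg.norm(pos - exit_pos) - np.linalg.norm(targets - exit_pos, axis=1)
        weight = np.exp(beta * gain)
        blocked = np.array([tuple(t) in occupied and not np.array_equal(t, pos) for t in targets])
        weight[blocked] = 0.0
        return targets, weight / weight.sum()

    # One pedestrian at (5, 5), exit at (0, 0), a neighbour occupying the cell (4, 5).
    targets, p = move_probabilities((5, 5), (0, 0), occupied={(4, 5)})
    for t, prob in zip(targets, p):
        print(tuple(int(v) for v in t), round(float(prob), 3))
    ```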

  19. Preliminary three-dimensional potential flow simulation of a five-liter flask air injection experiment

    International Nuclear Information System (INIS)

    Davis, J.E.

    1977-01-01

    The preliminary results of an unsteady three-dimensional potential flow analysis of a five-liter flask air injection experiment (small-scale model simulation of a nuclear reactor steam condensation system) are presented. The location and velocity of the free water surface in the flask as a function of time are determined during pipe venting and bubble expansion processes. The analyses were performed using an extended version of the NASA-Ames Three-Dimensional Potential Flow Analysis System (POTFAN), which uses the vortex lattice singularity method of potential flow analysis. The pressure boundary condition at the free water surface and the boundary condition along the free jet boundary near the pipe exit were ignored for the purposes of the present study. The results of the analysis indicate that large time steps can be taken without significantly reducing the accuracy of the solutions and that the assumption of inviscid flow should not have an appreciable effect on the geometry and velocity of the free water surface. In addition, the computation time required for the solutions was well within acceptable limits

  20. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    International Nuclear Information System (INIS)

    Jun, Hu; Huijun, Sun; Juan, Wei; Xiaodan, Chen; Lei, You; Musong, Gu

    2014-01-01

    A novel three-dimensional cellular automata evacuation model incorporating a stairs factor was proposed to account for the paired effect and for varying velocities in pedestrian evacuation. In the model, the probability of a pedestrian moving to a target position at the next moment is defined from a distance profit and a repulsive-force profit, and the evacuation strategy is elaborated in detail by analyzing the varying velocities and the repulsive phenomena during movement. Finally, experiments with the simulation platform were conducted to study the relationships between evacuation time, average velocity and pedestrian velocity. The results showed that when the ratio of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; in turn, when the ratio of paired pedestrians was higher, a strategy that avoided conflicts improved evacuation efficiency, and priority should be given to scattered evacuation. - Highlights: • A novel three-dimensional evacuation model was presented with a stair factor. • The paired effect and varying velocities were considered in the evacuation model. • The cellular automata model is improved by the repulsive force

  1. Flavour Physics with High-Luminosity Experiments

    CERN Document Server

    2016-01-01

    With the first dedicated B-factory experiments BaBar (USA) and BELLE (Japan) Flavour Physics has entered the phase of precision physics. LHCb (CERN) and the high luminosity extension of KEK-B together with the state of the art BELLE II detector will further push this precision frontier. Progress in this field always relied on close cooperation between experiment and theory, as extraction of fundamental parameters often is very indirect. To extract the full physics information from existing and future data, this cooperation must be further intensified. This MIAPP programme aims in particular to prepare for this task by joining experimentalists and theorists in the various relevant fields, with the goal to build the necessary tools in face of the challenge of new large data sets. The programme will begin with a focus on physics with non-leptonic final states, continued by semileptonic B meson decays and Tau decays, and on various aspects of CP symmetry violation closer to the end. In addition, in the final ...

  2. High temperature superconductivity space experiment (HTSSE)

    International Nuclear Information System (INIS)

    Nisenoff, M.; Gubser, D.V.; Wolf, S.A.; Ritter, J.C.; Price, G.

    1991-01-01

    The Naval Research Laboratory (NRL) is exploring the feasibility of deploying high temperature superconductivity (HTS) devices and components in space. A variety of devices, primarily passive microwave and millimeter wave components, have been procured and will be integrated with a cryogenic refrigerator system and data acquisition system to form the space package, which will be launched late in 1992. This Space Experiment will demonstrate that this technology is sufficiently robust to survive the space environment and has the potential to significantly improve space communications systems. The devices for the initial launch (HTSSE-I) have been received by NRL and evaluated electrically, thermally and mechanically, and will be integrated into the final space package early in 1991. In this paper the performance of the devices is summarized and some potential applications of HTS technology in space systems are outlined.

  3. High temperature experiment for accelerator inertial fusion

    International Nuclear Information System (INIS)

    Lee, E.P.

    1985-01-01

    The High Temperature Experiment (HTE) is intended to produce temperatures of 50-100 eV in solid density targets driven by heavy ion beams from a multiple beam induction linac. The fundamental variables (particle species, energy, number of beamlets, current and pulse length) must be fixed to achieve the temperature at minimum cost, subject to criteria of technical feasibility and relevance to the development of a Fusion Driver. The conceptual design begins with an assumed (radiation-limited) target temperature and uses limitations due to particle range, beamlet perveance, and target disassembly to bound the allowable values of mass number (A) and energy (E). An accelerator model is then applied to determine the minimum length accelerator, which is a guide to total cost. The accelerator model takes into account limits on transportable charge, maximum gradient, core mass per linear meter, and head-to-tail momentum variation within a pulse.

  4. Diamond detectors for high energy physics experiments

    Science.gov (United States)

    Bäni, L.; Alexopoulos, A.; Artuso, M.; Bachmair, F.; Bartosik, M.; Beacham, J.; Beck, H.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Cumalat, J.; Dabrowski, A.; D'Alessandro, R.; Dauvergne, D.; de Boer, W.; Dorfer, C.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gallin-Martel, L.; Gallin-Martel, M. L.; Gan, K. K.; Gastal, M.; Giroletti, C.; Goffe, M.; Goldstein, J.; Golubev, A.; Gorišek, A.; Grigoriev, E.; Grosse-Knetter, J.; Grummer, A.; Gui, B.; Guthoff, M.; Haughton, I.; Hiti, B.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Hutton, C.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Konovalov, V.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Lukosi, E.; Maazouzi, C.; Mandic, I.; Mathieu, C.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Muškinja, M.; Oh, A.; Oliviero, P.; Passeri, D.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Reichmann, M.; Riley, G.; Roe, S.; Sanz, D.; Scaringella, M.; Schaefer, D.; Schmidt, C. J.; Schnetzer, S.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Smith, S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Tannenwald, B.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2018-01-01

    Beam test results of the radiation tolerance study of chemical vapour deposition (CVD) diamond against different particle species and energies are presented. We also present beam test results on the independence of signal size from incident particle rate in charged particle detectors based on un-irradiated and irradiated poly-crystalline CVD diamond over a range of particle fluxes from 2 kHz/cm2 to 10 MHz/cm2. The pulse height of the sensors was measured with readout electronics with a peaking time of 6 ns. In addition, the functionality of poly-crystalline CVD diamond 3D devices was demonstrated in beam tests, and 3D diamond detectors are shown to be a promising technology for applications in future high luminosity experiments.

  5. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
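
    The record is truncated, so only the general task (bandwidth selection for variable kernel density estimation) is clear. A common baseline is sketched below under the assumption of a sample-point estimator whose per-point bandwidth is the distance to the k-th nearest neighbour; this gives a feel for what variable bandwidths mean and is not necessarily the estimator developed in this work.

```python
import numpy as np
from scipy.spatial import cKDTree

def variable_kde(train, query, k=10):
    """Sample-point variable-bandwidth Gaussian KDE: each training point gets
    its own bandwidth, here taken as the distance to its k-th nearest
    neighbour (a common heuristic, not necessarily the paper's estimator)."""
    n, d = train.shape
    tree = cKDTree(train)
    # distance to the k-th neighbour (k + 1 because the point itself is returned)
    dists, _ = tree.query(train, k=k + 1)
    h = dists[:, -1]                                        # per-point bandwidths, shape (n,)
    diffs = query[:, None, :] - train[None, :, :]           # (m, n, d)
    sq = np.sum(diffs ** 2, axis=-1) / h[None, :] ** 2      # (m, n)
    norm = (2 * np.pi) ** (d / 2) * h ** d                  # per-point normalisation
    return np.mean(np.exp(-0.5 * sq) / norm[None, :], axis=1)

# usage sketch: densities = variable_kde(X_train, X_query, k=15)
```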

  6. Five and four dimensional experiments for robust backbone resonance assignment of large intrinsically disordered proteins: application to Tau3x protein

    International Nuclear Information System (INIS)

    Żerko, Szymon; Byrski, Piotr; Włodarczyk-Pruszyński, Paweł; Górka, Michał; Ledolter, Karin; Masliah, Eliezer; Konrat, Robert; Koźmiński, Wiktor

    2016-01-01

    New experiments dedicated to the backbone resonance assignment of large IDPs are presented. The most distinctive feature of all the described techniques is the employment of MOCCA-XY16 mixing sequences to obtain effective magnetization transfers between carbonyl carbon backbone nuclei. The proposed 4- and 5-dimensional experiments provide high dispersion of the obtained signals, making them suitable for large IDPs (an application to the 354-residue Tau protein 3x isoform is presented), and provide both forward and backward connectivities. Moreover, connecting short chains interrupted by proline residues is also possible. All the experiments employ non-uniform sampling.

  7. Five and four dimensional experiments for robust backbone resonance assignment of large intrinsically disordered proteins: application to Tau3x protein

    Energy Technology Data Exchange (ETDEWEB)

    Żerko, Szymon; Byrski, Piotr; Włodarczyk-Pruszyński, Paweł; Górka, Michał [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland); Ledolter, Karin [University of Vienna, Department of Computational and Structural Biology, Max F. Perutz Laboratories (Austria); Masliah, Eliezer [University of California, San Diego, Departments of Neuroscience and Pathology (United States); Konrat, Robert [University of Vienna, Department of Computational and Structural Biology, Max F. Perutz Laboratories (Austria); Koźmiński, Wiktor, E-mail: kozmin@chem.uw.edu.pl [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland)

    2016-08-15

    New experiments dedicated to the backbone resonance assignment of large IDPs are presented. The most distinctive feature of all the described techniques is the employment of MOCCA-XY16 mixing sequences to obtain effective magnetization transfers between carbonyl carbon backbone nuclei. The proposed 4- and 5-dimensional experiments provide high dispersion of the obtained signals, making them suitable for large IDPs (an application to the 354-residue Tau protein 3x isoform is presented), and provide both forward and backward connectivities. Moreover, connecting short chains interrupted by proline residues is also possible. All the experiments employ non-uniform sampling.

  8. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  9. High dimensional and high resolution pulse sequences for backbone resonance assignment of intrinsically disordered proteins

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor, E-mail: kozmin@chem.uw.edu.pl [University of Warsaw, Faculty of Chemistry (Poland); Sanderova, Hana; Krasny, Libor [Institute of Microbiology, Academy of Sciences of the Czech Republic, Laboratory of Molecular Genetics of Bacteria, Department of Bacteriology (Czech Republic)

    2012-04-15

    Four novel 5D (HACA(N)CONH, HNCOCACB, (HACA)CON(CA)CONH, (H)NCO(NCA)CONH) and one 6D ((H)NCO(N)CACONH) NMR pulse sequences are proposed. The new experiments employ non-uniform sampling, which enables high resolution to be achieved in indirectly detected dimensions. The experiments facilitate resonance assignment of intrinsically disordered proteins. The novel pulse sequences were successfully tested using the δ subunit (20 kDa) of Bacillus subtilis RNA polymerase, which has an 81-amino acid disordered part containing various repetitive sequences.

  10. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  11. High gain requirements and high field Tokamak experiments

    International Nuclear Information System (INIS)

    Cohn, D.R.

    1994-01-01

    Operation at sufficiently high gain (ratio of fusion power to external heating power) is a fundamental requirement for tokamak power reactors. For typical reactor concepts, the gain is greater than 25. Self-heating from alpha particles in deuterium-tritium plasmas can greatly reduce the nτ/temperature requirements for high gain. A range of high gain operating conditions is possible with different values of alpha-particle efficiency (the fraction of alpha-particle power that actually heats the plasma) and with different ratios of self-heating to external heating. At one extreme, there is ignited operation, where all of the required plasma heating is provided by alpha particles and the alpha-particle efficiency is 100%. At the other extreme, there is the case of no heating contribution from alpha particles. The nτ/temperature requirements for high gain are determined as a function of alpha-particle heating efficiency. Possibilities for high gain experiments in deuterium-tritium, deuterium, and hydrogen plasmas are discussed.
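
    How the confinement requirement depends on gain and alpha-heating efficiency can be made explicit with a standard steady-state power balance. The relation below assumes a D-T plasma with equal ion and electron temperatures and alpha particles carrying one fifth of the fusion power; it is the textbook form of the argument, not the specific model used in the paper.

```latex
% Steady-state power balance with gain Q = P_fus / P_ext and alpha-heating
% efficiency eta_alpha (alpha particles carry P_fus / 5 in a D-T plasma):
%   P_ext + eta_alpha P_alpha = 3 n T V / tau_E ,   P_fus = (n^2 / 4) <sigma v> E_fus V
\begin{equation}
  n \, \tau_E \, T \;=\;
  \frac{12 \, T^{2}}{\langle \sigma v \rangle \, E_{\mathrm{fus}}}
  \cdot \frac{Q}{1 + \eta_\alpha Q / 5}
\end{equation}
% Q -> infinity with eta_alpha = 1 recovers the usual ignition condition;
% eta_alpha -> 0 gives the externally heated limit with no alpha contribution.
```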

  12. HASE: Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    G.V. Roshchupkin (Gennady); H.H.H. Adams (Hieab); M.W. Vernooij (Meike); A. Hofman (Albert); C.M. van Duijn (Cornelia); M.K. Ikram (Kamran); W.J. Niessen (Wiro)

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  13. HASE : Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    Roshchupkin, G. V.; Adams, H; Vernooij, Meike W.; Hofman, A; Van Duijn, C. M.; Ikram, M. Arfan; Niessen, W.J.

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  14. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    Science.gov (United States)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
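
    The truncation onto second-order Legendre polynomials mentioned above can be written compactly. The expansion below, with μ = sin(latitude), is the standard form used in such "1-1/2-D" models and is given for orientation only, not as the paper's exact equations.

```latex
% Truncated latitudinal expansion of a tracer mixing ratio chi, keeping only
% the global mean and the second Legendre polynomial (mu = sin(latitude)):
\begin{equation}
  \chi(z, \mu, t) \;\approx\; \chi_0(z, t) + \chi_2(z, t) \, P_2(\mu),
  \qquad P_2(\mu) = \tfrac{1}{2}\bigl(3\mu^{2} - 1\bigr)
\end{equation}
```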

  15. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  16. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

    A novel data storage method in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as the data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and give numerical results showing the effectiveness of the proposed data storage.

  17. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    Science.gov (United States)

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.

  18. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
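
    The thresholding idea behind such estimators, zeroing out correlation entries whose magnitude falls below a data-driven cutoff, can be sketched as follows. The rank-based base estimate and the universal threshold level c·sqrt(log p / n) are illustrative choices, not the adaptive, robust estimator proposed in the paper.

```python
import numpy as np

def thresholded_correlation(X, c=1.0):
    """Sketch of a sparse correlation estimate: start from a rank-based
    (outlier-resistant) correlation matrix and zero out entries whose
    magnitude falls below a data-driven threshold ~ sqrt(log p / n).
    This illustrates the thresholding idea only, not the adaptive
    estimator proposed in the paper."""
    n, p = X.shape
    # Spearman-style correlation: Pearson correlation of the column ranks
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    R = np.corrcoef(ranks, rowvar=False)       # p x p correlation matrix
    lam = c * np.sqrt(np.log(p) / n)           # universal threshold level
    R_sparse = np.where(np.abs(R) >= lam, R, 0.0)
    np.fill_diagonal(R_sparse, 1.0)
    return R_sparse

# usage sketch: R_hat = thresholded_correlation(expression_matrix, c=1.0)
```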

  19. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.; Schoen, Alia P.; Hu, Liangbing; Kim, Han Sun; Heilshorn, Sarah C.; Cui, Yi

    2010-01-01

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. Here we present a textile-based multiscale device for the high-speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, yields a gravity-fed device operating at 100,000 L/(h m2) that can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  20. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.

    2010-09-08

    The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially, as biofouling is a commonplace and serious problem. Here we present a textile-based multiscale device for the high-speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, yields a gravity-fed device operating at 100,000 L/(h m2) that can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  1. One-dimensional model for QCD at high energy

    International Nuclear Information System (INIS)

    Iancu, E.; Santana Amaral, J.T. de; Soyez, G.; Triantafyllopoulos, D.N.

    2007-01-01

    We propose a stochastic particle model in (1+1) dimensions, with one dimension corresponding to rapidity and the other one to the transverse size of a dipole in QCD, which mimics high-energy evolution and scattering in QCD in the presence of both saturation and particle-number fluctuations, and hence of pomeron loops. The model evolves via non-linear particle splitting, with a non-local splitting rate which is constrained by boost-invariance and multiple scattering. The splitting rate saturates at high density, much like the gluon emission rate in the JIMWLK evolution. In the mean field approximation obtained by ignoring fluctuations, the model exhibits the hallmarks of the BK equation, namely a BFKL-like evolution at low density, the formation of a traveling wave, and geometric scaling. In the full evolution including fluctuations, the geometric scaling is washed out at high energy and replaced by diffusive scaling. It is likely that the model belongs to the universality class of the reaction-diffusion process. The analysis of the model sheds new light on the pomeron loop equations in QCD and their possible improvements.

  2. High counting rate, two-dimensional position sensitive timing RPC

    CERN Document Server

    Petrovici, M.; Simion, V; Bartos, D; Caragheorgheopol, G; Deppner, I; Adamczewski-Musch, J; Linev, S; Williams, MCS; Loizeau, P; Herrmann, N; Doroud, K; Radulescu, L; Constantin, F

    2012-01-01

    Resistive Plate Chambers (RPCs) are widely employed as muon trigger systems in the Large Hadron Collider (LHC) experiments. Their large detector volume and the use of a relatively expensive gas mixture make a closed-loop gas circulation unavoidable. The return gas of RPCs operated in conditions similar to the experimental background foreseen at the LHC contains a large amount of impurities potentially dangerous for long-term operation. Several gas-cleaning agents, characterized during the past years, are currently in use. New tests allowed understanding of the properties and performance of a large number of purifiers. On that basis, an optimal combination of different filters consisting of Molecular Sieve (MS) 5Å and 4Å, and a Cu catalyst R11 has been chosen and validated by irradiating a set of RPCs at the CERN Gamma Irradiation Facility (GIF) for several years. A very important feature of this new configuration is the increase of the cycle duration for each purifier, which results in better system stabilit...

  3. Self-dissimilarity as a High Dimensional Complexity Measure

    Science.gov (United States)

    Wolpert, David H.; Macready, William

    2005-01-01

    For many systems characterized as "complex" the patterns exhibited on different scales differ markedly from one another. For example, the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain-forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.

  4. An angle-based subspace anomaly detection approach to high-dimensional data: With an application to industrial fault detection

    International Nuclear Information System (INIS)

    Zhang, Liangwei; Lin, Jing; Karim, Ramin

    2015-01-01

    The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting a meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain the detection accuracy in high-dimensional circumstances. The suggested approach assesses the angle between all pairs of two lines for one specific anomaly candidate: the first line is connected by the relevant data point and the center of its adjacent points; the other line is one of the axis-parallel lines. Those dimensions which have a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, to provide preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustrations demonstrate its efficacy in fault detection applications.
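
    The three steps described above (neighbourhood-centre line, angle-based selection of axis-parallel dimensions, normalised Mahalanobis distance in the retained subspace) translate into a short procedure. The neighbourhood size, the "above-average cosine" retention rule and the regularisation term below are assumptions for illustration, not the published ABSAD algorithm.

```python
import numpy as np

def absad_score(X, idx, k=20):
    """Rough sketch of angle-based subspace anomaly detection for point X[idx]:
    (1) form the line from the point to the centre of its k nearest neighbours,
    (2) keep the dimensions whose axis makes a comparatively small angle with
        that line (large absolute direction cosine),
    (3) return a normalised Mahalanobis-type distance computed in that subspace.
    The neighbourhood size, the retention rule and the regularisation are
    illustrative choices only."""
    x = X[idx]
    d2 = np.sum((X - x) ** 2, axis=1)
    nbrs = X[np.argsort(d2)[1:k + 1]]                    # k nearest neighbours (excluding x)
    centre = nbrs.mean(axis=0)
    v = centre - x
    cosines = np.abs(v) / (np.linalg.norm(v) + 1e-12)    # |cos| of angle with each axis
    dims = np.where(cosines >= cosines.mean())[0]        # relevant axis-parallel subspace
    sub = nbrs[:, dims]
    diff = x[dims] - sub.mean(axis=0)
    cov = np.cov(sub, rowvar=False) + 1e-6 * np.eye(len(dims))
    maha2 = diff @ np.linalg.solve(cov, diff)
    return np.sqrt(maha2) / np.sqrt(len(dims))           # normalise by subspace size

# usage sketch: scores = [absad_score(X, i) for i in range(len(X))]
```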

  5. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. There are measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large enough. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: 1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; 2) a compensation panel (a rigid panel with several markers located at known positions) is mounted to the specimen to track the specimen's motion so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; 3) three-dimensional world coordinates of measuring points on the specimen can be reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method leads to good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has been applied in tensile experiments to obtain high-accuracy results as well.
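
    The pose-tracking part of the compensation idea, recovering the rotation and translation of the marker panel with a pre-calibrated camera, can be sketched with OpenCV's planar pose solver. The marker layout, the variable names and the use of solvePnP are assumptions for illustration; the authors' own coordinate-transform algorithm may differ.

```python
import numpy as np
import cv2

# Step 2 of the compensation idea: recover the pose of the compensation panel
# (rigid panel with markers at known positions) relative to the calibrated camera.
# K and dist are assumed to come from a prior cv2.calibrateCamera run (step 1).
def panel_pose(marker_world_xyz, marker_image_px, K, dist):
    """marker_world_xyz: (N, 3) known marker coordinates on the panel (Z = 0 plane).
    marker_image_px:    (N, 2) detected marker centres in the current image.
    Returns the rotation matrix and translation vector of the panel."""
    ok, rvec, tvec = cv2.solvePnP(
        marker_world_xyz.astype(np.float32),
        marker_image_px.astype(np.float32),
        K, dist)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Step 3 (schematic): with the panel pose tracking the specimen's rigid-body
# motion, measurement points can be mapped back into a common world frame
# before strains are computed, cancelling out-of-plane translation and rotation.
```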

  6. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  7. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
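
    A minimal sketch of the kind of comparison described: the log-determinant computed from the plain sample covariance versus from a shrinkage (Ledoit-Wolf) estimator, one of the regularised estimators commonly used when the dimension is comparable to the sample size. The specific eight estimators compared in the paper are not reproduced here.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
n, p = 100, 80                       # sample size comparable to the dimension
X = rng.standard_normal((n, p))      # true covariance = identity, so log|Sigma| = 0

# Plain sample covariance: badly conditioned when p ~ n, log-determinant collapses.
S = np.cov(X, rowvar=False)
sign_s, logdet_s = np.linalg.slogdet(S)

# Shrinkage (Ledoit-Wolf) estimate: well conditioned, log-determinant closer to 0.
lw = LedoitWolf().fit(X)
sign_lw, logdet_lw = np.linalg.slogdet(lw.covariance_)

print(f"sample covariance  log|S| = {logdet_s:8.2f}")
print(f"Ledoit-Wolf        log|S| = {logdet_lw:8.2f}")
```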

  8. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  9. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  10. Carbon doped GaAs/AlGaAs heterostructures with high mobility two dimensional hole gas

    Energy Technology Data Exchange (ETDEWEB)

    Hirmer, Marika; Bougeard, Dominique; Schuh, Dieter [Institut fuer Experimentelle und Angewandte Physik, Universitaet Regensburg, D 93040 Regensburg (Germany); Wegscheider, Werner [Laboratorium fuer Festkoerperphysik, ETH Zuerich, 8093 Zuerich (Switzerland)

    2011-07-01

    Two dimensional hole gases (2DHG) with high carrier mobilities are required for both fundamental research and possible future ultrafast spintronic devices. Here, two different types of GaAs/AlGaAs heterostructures hosting a 2DHG were investigated. The first structure is a GaAs QW embedded in an AlGaAs barrier grown by molecular beam epitaxy with carbon doping on only one side of the quantum well (QW) (single-side doped, ssd), while the second structure is similar but with symmetrically arranged doping layers on both sides of the QW (double-side doped, dsd). The ssd structure shows hole mobilities up to 1.2×10^6 cm^2/Vs, which are achieved after illumination. In contrast, the dsd structure hosts a 2DHG with mobility up to 2.05×10^6 cm^2/Vs; here, carrier mobility and carrier density are not affected by illuminating the sample. Both samples showed distinct Shubnikov-de Haas oscillations and fractional quantum Hall plateaus in magnetotransport experiments performed at 20 mK, indicating the high quality of the material. In addition, the influence of different temperature profiles during growth and the influence of the Al content x of the Al(x)Ga(1-x)As barrier on carrier concentration and mobility were investigated and are presented here.

  11. A two dimensional clinostat experiment for microalgae cultures - basic work for bio-regenerative life support systems

    Science.gov (United States)

    Harting, Benjamin; Slenzka, Klaus

    2012-07-01

    To investigate the influence of microgravity environments on photosynthetic organisms, we designed a two-dimensional clinostat experiment for a suspended cell culture of Chlamydomonas reinhardtii. A novel approach to online measurement of the parameters relevant for characterizing photosynthesis was implemented. To address the photosynthesis rate, we installed and validated an optical measurement system to monitor the evolution and consumption of dissolved oxygen. Simultaneously, a PAM sensor to analyse the fluorescence quantum yield of the photochemical reaction was integrated. Thus it was possible to directly characterize important parameters of the phototrophic metabolism during clinorotation. The experiment design, including well-suited light conditions and further biochemical analyses, was developed directly for microalgal cell cultures. Changes in the photosynthetic efficiency of phototrophic cyanobacteria have been observed during parabolic flight campaigns, but the cause is not yet understood. Possible explanations include the dependence of gravitaxis on intracellular ion concentration or the existence of mechanosensitive ion channels, for example associated with the chloroplasts of Chlamydomonas reinhardtii. The purpose of the microalgal clinostat is to enable studies in a quasi-microgravity environment for the process design of future bioregenerative life support systems for spaceflight missions. First results have indicated the need for special nourishment of the cell culture during microgravity experiments. Further data will be presented during the assembly.

  12. Evaluation of viewing experiences induced by a curved three-dimensional display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-10-01

    Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been made. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content on two different displays (flat and curved 3-D displays) in a counterbalanced, within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were attributed to engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings suggest that curved 3-D displays can be effective for enhancing engagement among viewers for specific viewing times and environments.

  13. High accessible experimental information on CPD experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, M. [RIAM, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan)], E-mail: hasegawa@triam.kyushu-u.ac.jp; Nakamura, K.; Higashijima, A.; Kawasaki, S.; Nakashima, H.; Sato, K.N.; Zushi, H.; Hanada, K.; Sakamoto, M.; Idei, H. [RIAM, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan)

    2008-04-15

    In the CPD [1] (Compact PWI experimental Device) experiment, information such as the electronic logbook and sequence status is distributed by Web services in preparation for future experimental environments such as steady-state operation and remote participation. Hence, all researchers can acquire this information with a Web browser installed on a personal computer, provided they are connected to the Internet. However, carrying a notebook computer at all times is a burden for researchers, who moreover may not always be connected to the Internet. Mobile phones are superior in portability to notebook computers and connect easily to the Internet through the wireless networks of the telecom carriers. Moreover, since recent mobile phones have full browsing functionality, their affinity with Web services is increasing. On this account, Web services for mobile phones were developed to access experimental information. For sequence monitoring, a mobile MIDlet application that utilizes phone-specific functions such as sound and vibration was also developed to draw researchers' attention to the sequence status.

  14. High accessible experimental information on CPD experiment

    International Nuclear Information System (INIS)

    Hasegawa, M.; Nakamura, K.; Higashijima, A.; Kawasaki, S.; Nakashima, H.; Sato, K.N.; Zushi, H.; Hanada, K.; Sakamoto, M.; Idei, H.

    2008-01-01

    In the CPD [1] (Compact PWI experimental Device) experiment, information such as the electronic logbook and sequence status is distributed by Web services in preparation for future experimental environments such as steady-state operation and remote participation. Hence, all researchers can acquire this information with a Web browser installed on a personal computer, provided they are connected to the Internet. However, carrying a notebook computer at all times is a burden for researchers, who moreover may not always be connected to the Internet. Mobile phones are superior in portability to notebook computers and connect easily to the Internet through the wireless networks of the telecom carriers. Moreover, since recent mobile phones have full browsing functionality, their affinity with Web services is increasing. On this account, Web services for mobile phones were developed to access experimental information. For sequence monitoring, a mobile MIDlet application that utilizes phone-specific functions such as sound and vibration was also developed to draw researchers' attention to the sequence status.

  15. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

    Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur has remained elusive. We report that molybdenum disulfide layers are able to serve as a nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), agree with the computational insights. Owing to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s^-1 without the addition of conductive additives. Using such hybrid structures as the electrode, two-electrode supercapacitor cells yield a power density of 106 Wh kg^-1 and an energy density of 47.5 Wh kg^-1 in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  16. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference in sparse and noisy dimensions occupies a large proportion of the similarity, leading to apparent dissimilarity between any pair of results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
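
    The interval-mapping rule described above can be sketched as follows for a pair of points. The equal-width binning, the per-dimension contribution and the averaging into a [0, 1] score are illustrative assumptions rather than the exact normalisation proposed in the paper.

```python
import numpy as np

def lattice_similarity(X, a, b, n_bins=10):
    """Sketch of the net-lattice idea for points X[a] and X[b]: each dimension's
    range is cut into equal intervals, each component is mapped to its interval
    index, and only dimensions in which the two points fall into the same or an
    adjacent interval contribute to the similarity."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    width = (hi - lo) / n_bins + 1e-12
    bins_a = np.floor((X[a] - lo) / width).astype(int)
    bins_b = np.floor((X[b] - lo) / width).astype(int)
    usable = np.abs(bins_a - bins_b) <= 1                  # same or adjacent interval
    if not usable.any():
        return 0.0
    per_dim = 1.0 - np.abs(X[a, usable] - X[b, usable]) / (2 * width[usable])
    return float(np.clip(per_dim, 0.0, 1.0).mean())        # similarity in [0, 1]
```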

  17. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
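
    The core operation, moving smoothly through a continuum of 2-d projections of a latent space, can be sketched outside the GUI by interpolating between two orthonormal projection planes. DataHigh itself is a Matlab tool; the PCA-based latent extraction, the toy spike-count data and the QR re-orthonormalisation below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def projection_path(latents, plane_a, plane_b, n_frames=30):
    """Yield a sequence of 2-d views of `latents` (n_samples x n_dims) while
    smoothly moving from projection plane `plane_a` to `plane_b`
    (each an n_dims x 2 matrix with orthonormal columns)."""
    for t in np.linspace(0.0, 1.0, n_frames):
        blend = (1 - t) * plane_a + t * plane_b
        # re-orthonormalise the blended plane so every frame is a valid projection
        q, _ = np.linalg.qr(blend)
        yield latents @ q[:, :2]

# usage sketch: extract latent variables, then animate between two random planes
rng = np.random.default_rng(1)
spike_counts = rng.poisson(5.0, size=(200, 40))           # trials x neurons (toy data)
latents = PCA(n_components=8).fit_transform(spike_counts)
plane_a, _ = np.linalg.qr(rng.standard_normal((8, 2)))
plane_b, _ = np.linalg.qr(rng.standard_normal((8, 2)))
frames = list(projection_path(latents, plane_a, plane_b))
```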

  18. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  19. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250

  20. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  1. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  2. Nuclear science experiments in high schools

    International Nuclear Information System (INIS)

    Lowenthal, G.C.

    1990-01-01

    This paper comments on the importance of nuclear science experiments and demonstrations to science education in secondary schools. It claims that radiation protection is incompletely realised unless supported by some knowledge about ionizing radiations. The negative influence of the NHMRC Code of Practice on school experiments involving ionizing radiation is also outlined. The authors offer some suggestions for a new edition of the Code with a positive approach to nuclear science experiments in schools. 7 refs., 4 figs

  3. Operating experience with the DRAGON High Temperature Reactor experiment

    International Nuclear Information System (INIS)

    Simon, R.A.; Capp, P.D.

    2002-01-01

    The Dragon Reactor Experiment in Winfrith/UK was a materials test facility for a number of HTR projects pursued in the sixties and seventies of the last century. It was built and managed as an OECD/NEA international joint undertaking. The reactor operated successfully between 1964 and 1975 to satisfy the growing demand for irradiation testing of fuels and fuel elements as well as for technological tests of components and materials. The paper describes the reactor's main experimental features and presents results of 11 years of reactor operation relevant for future HTRs. (author)

  4. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
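
    As a rough illustration of the pipeline described above, the sketch below (plain NumPy, on synthetic data; the kernel bandwidth, number of retained eigenvectors and GP length scale are arbitrary choices, not values from the article) builds a diffusion-map embedding, uses distances in that embedding inside a squared-exponential kernel, and performs Gaussian-process (kernel-ridge style) regression with it.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic high-dimensional "well log" proxies lying near a 1-d manifold.
        t = np.sort(rng.uniform(0, 3 * np.pi, 150))
        X = np.c_[np.cos(t), np.sin(t), 0.1 * t] + 0.01 * rng.normal(size=(150, 3))
        X = np.c_[X, rng.normal(scale=0.01, size=(150, 7))]   # pad to 10 ambient dims
        y = np.sin(t) + 0.05 * rng.normal(size=t.size)        # quantity of interest

        # --- Diffusion map -----------------------------------------------------
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        eps = np.median(d2)
        W = np.exp(-d2 / eps)
        P = W / W.sum(axis=1, keepdims=True)                  # Markov transition matrix
        evals, evecs = np.linalg.eig(P)
        order = np.argsort(-evals.real)
        lam = evals.real[order][1:4]                          # skip the trivial eigenvalue 1
        psi = evecs.real[:, order][:, 1:4]
        coords = psi * lam                                    # diffusion coordinates (t = 1)

        # --- GP regression using distances in diffusion space -------------------
        def k(A, B, ell=0.5):
            D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-D2 / (2 * ell ** 2))

        train = np.arange(0, 150, 2)
        test = np.arange(1, 150, 2)
        K = k(coords[train], coords[train]) + 1e-4 * np.eye(train.size)
        y_pred = k(coords[test], coords[train]) @ np.linalg.solve(K, y[train])
        print("RMSE on held-out points:", np.sqrt(np.mean((y_pred - y[test]) ** 2)))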

  5. Simplification of coding of NRU loop experiment software with dimensional generator

    International Nuclear Information System (INIS)

    Davis, R. S.

    2006-01-01

    The following are specific topics of this paper: 1.There is much creativity in the manner in which Dimensional Generator can be applied to a specific programming task [2]. This paper tells how Dimensional Generator was applied to a reactor-physics task. 2. In this first practical use, Dimensional Generator itself proved not to need change, but a better user interface was found necessary, essentially because the relevance of Dimensional Generator to reactor physics was initially underestimated. It is briefly described. 3. The use of Dimensional Generator helps make reactor-physics source code somewhat simpler. That is explained here with brief examples from BURFEL-PC and WIMSBURF. 4. Most importantly, with the help of Dimensional Generator, all erroneous physical expressions were automatically detected. The errors are detailed here (in spite of the author's embarrassment) because they show clearly, both in theory and in practice, how Dimensional Generator offers quality enhancement of reactor-physics programming. (authors)

  6. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase in measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as data streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  7. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
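
    The early-warning construction sketched above, an overlap between the instantaneous configuration and the unstable eigendirections of a mean-field stability matrix, can be written generically. The toy sketch below (NumPy; the replicator-style dynamics, interaction matrix, noise level and step sizes are invented for illustration and are not the Tangled Nature model) linearises the drift around the current state and reports that overlap as the run proceeds.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 8
        A = rng.normal(scale=0.5, size=(n, n))                # toy interaction matrix

        def f(x):
            """Replicator-style drift: x_i * ((A x)_i - x.A.x)."""
            fitness = A @ x
            return x * (fitness - x @ fitness)

        def jacobian(x, h=1e-6):
            """Finite-difference stability matrix of the deterministic drift."""
            J = np.zeros((n, n))
            for j in range(n):
                dx = np.zeros(n)
                dx[j] = h
                J[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
            return J

        x = np.full(n, 1.0 / n)
        for step in range(2000):
            x = np.clip(x + 0.01 * f(x) + 0.01 * rng.normal(size=n), 1e-9, None)
            x /= x.sum()
            if step % 200 == 0:
                evals, evecs = np.linalg.eig(jacobian(x))
                unstable = evecs.real[:, evals.real > 0]      # unstable eigendirections
                v = x - x.mean()
                v /= (np.linalg.norm(v) + 1e-12)
                if unstable.size:
                    coef = np.linalg.lstsq(unstable, v, rcond=None)[0]
                    overlap = np.linalg.norm(unstable @ coef)
                else:
                    overlap = 0.0
                print(f"step {step:5d}  early-warning overlap = {overlap:.3f}")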

  8. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  9. Evaluation of one-dimensional and two-dimensional volatility basis sets in simulating the aging of secondary organic aerosol with smog-chamber experiments.

    Science.gov (United States)

    Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming

    2015-02-17

    We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.

  10. Linking experiment and theory for three-dimensional networked binary metal nanoparticle–triblock terpolymer superstructures

    KAUST Repository

    Li, Zihui

    2014-02-21

    © 2014 Macmillan Publishers Limited. Controlling superstructure of binary nanoparticle mixtures in three dimensions from self-assembly opens enormous opportunities for the design of materials with unique properties. Here we report on how the intimate coupling of synthesis, in-depth electron tomographic characterization and theory enables exquisite control of superstructure in highly ordered porous three-dimensional continuous networks from single and binary mixtures of metal nanoparticles with a triblock terpolymer. Poly(isoprene-block-styrene-block-(N,N-dimethylamino)ethyl methacrylate) is synthesized and used as a structure-directing agent for ligand-stabilized platinum and gold nanoparticles. Quantitative analysis provides insights into short- and long-range nanoparticle-nanoparticle correlations, and local and global contributions to structural chirality in the networks. Results provide synthesis criteria for next-generation mesoporous network superstructures from binary nanoparticle mixtures for potential applications in areas including catalysis.

  11. Gyrokinetic Vlasov code including full three-dimensional geometry of experiments

    International Nuclear Information System (INIS)

    Nunami, Masanori; Watanabe, Tomohiko; Sugama, Hideo

    2010-03-01

    A new gyrokinetic Vlasov simulation code, GKV-X, is developed for investigating the turbulent transport in magnetic confinement devices with non-axisymmetric configurations. Effects of the magnetic surface shapes in a three-dimensional equilibrium obtained from the VMEC code are accurately incorporated. Linear simulations of the ion temperature gradient instabilities and the zonal flows in the Large Helical Device (LHD) configuration are carried out by the GKV-X code for a benchmark test against the GKV code. The frequency, the growth rate, and the mode structure of the ion temperature gradient instability are influenced by the VMEC geometrical data such as the metric tensor components of the Boozer coordinates for high poloidal wave numbers, while the difference between the zonal flow responses obtained by the GKV and GKV-X codes is found to be small in the core LHD region. (author)

  12. Lecture note on circuit technology for high energy physics experiment

    International Nuclear Information System (INIS)

    Ikeda, Hirokazu.

    1992-07-01

    This lecture gives basic ideas and practice of the circuit technology for high energy physics experiment. The program of this lecture gives access to the integrated circuit technology to be applied for a high luminosity hadron collider experiment. (author)

  13. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  14. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  15. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)]

    2016-07-27

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  16. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper clusters data using a high-dimensional similarity-based PCM (SPCM) combined with ant colony optimization intelligence, which is effective in clustering non-spatial data without requiring prior knowledge of the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this already yields efficient clustering, the result is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified on synthetic datasets.

  17. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  18. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinities towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  19. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinities towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.

  20. Quantitative study of quasi-one-dimensional Bose gas experiments via the stochastic Gross-Pitaevskii equation

    International Nuclear Information System (INIS)

    Cockburn, S. P.; Gallucci, D.; Proukakis, N. P.

    2011-01-01

    The stochastic Gross-Pitaevskii equation is shown to be an excellent model for quasi-one-dimensional Bose gas experiments, accurately reproducing the in situ density profiles recently obtained in the experiments of Trebbia et al.[Phys. Rev. Lett. 97, 250403 (2006)] and van Amerongen et al.[Phys. Rev. Lett. 100, 090402 (2008)] and the density fluctuation data reported by Armijo et al.[Phys. Rev. Lett. 105, 230402 (2010)]. To facilitate such agreement, we propose and implement a quasi-one-dimensional extension to the one-dimensional stochastic Gross-Pitaevskii equation for the low-energy, axial modes, while atoms in excited transverse modes are treated as independent ideal Bose gases.

  1. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
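
    A stripped-down version of the two-stage idea, unsupervised slow feature analysis to compress a high-dimensional input stream followed by simple reward-driven learning on the slow features, can be sketched as follows (linear SFA and tabular Q-learning in NumPy; the toy environment, dimensionalities and learning rates are invented, and the original work uses a hierarchical SFA network rather than the single linear layer shown here).

        import numpy as np

        rng = np.random.default_rng(3)
        n_pos, obs_dim, n_slow = 20, 100, 2
        M = rng.normal(size=(obs_dim, 2))                      # fixed random read-out

        def observe(p):
            """High-dimensional noisy observation of a position on a ring."""
            angle = 2 * np.pi * p / n_pos
            base = M @ np.array([np.cos(angle), np.sin(angle)])
            return base + 0.1 * rng.normal(size=obs_dim)

        # --- Stage 1: linear SFA on an exploratory random walk ------------------
        pos, obs = 0, []
        for _ in range(5000):
            pos = (pos + rng.choice([-1, 1])) % n_pos
            obs.append(observe(pos))
        X = np.array(obs)
        mu = X.mean(axis=0)
        X = X - mu
        C = np.cov(X.T) + 1e-6 * np.eye(obs_dim)
        d, E = np.linalg.eigh(C)
        Wht = E @ np.diag(1.0 / np.sqrt(d)) @ E.T              # whitening transform
        dZ = np.diff(X @ Wht, axis=0)
        dvals, dvecs = np.linalg.eigh(np.cov(dZ.T))
        slow_dirs = Wht @ dvecs[:, :n_slow]                    # slowest-varying directions

        def slow_state(p):
            """Discretise the slow-feature representation into a small state index."""
            s = (observe(p) - mu) @ slow_dirs
            ang = np.arctan2(s[1], s[0])
            return int((ang + np.pi) / (2 * np.pi) * n_pos) % n_pos

        # --- Stage 2: reward-based learning on the slow features ----------------
        Q, goal = np.zeros((n_pos, 2)), 5
        for episode in range(300):
            p = rng.integers(n_pos)
            for _ in range(100):
                s = slow_state(p)
                a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
                p = (p + (1 if a else -1)) % n_pos
                r = 1.0 if p == goal else 0.0
                Q[s, a] += 0.1 * (r + 0.9 * Q[slow_state(p)].max() - Q[s, a])
                if r:
                    break
        print("Learned Q-values (max over actions):", np.round(Q.max(axis=1), 2))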

  2. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the most suited parallel programming model for a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
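
    The study compares MPI, MapReduce and GPGPU implementations; none of those are reproduced here. A minimal sketch of the underlying gridding pattern, scattered points interpolated onto a regular grid with independent rows handed to parallel workers, is shown below using Python's standard multiprocessing module. Inverse-distance weighting stands in for kriging to keep the code short, and the point count, grid size and worker count are arbitrary.

        import numpy as np
        from multiprocessing import Pool

        rng = np.random.default_rng(4)
        pts = rng.uniform(0, 100, size=(5000, 2))              # x, y of LiDAR-like returns
        z = np.sin(pts[:, 0] / 10.0) + 0.1 * rng.normal(size=len(pts))

        nx = ny = 100
        xs = np.linspace(0, 100, nx)
        ys = np.linspace(0, 100, ny)

        def interpolate_row(j):
            """Inverse-distance weighting for one grid row (an independent work unit)."""
            row = np.empty(nx)
            for i, x in enumerate(xs):
                d2 = (pts[:, 0] - x) ** 2 + (pts[:, 1] - ys[j]) ** 2
                w = 1.0 / (d2 + 1e-6)
                row[i] = np.sum(w * z) / np.sum(w)
            return j, row

        if __name__ == "__main__":
            dem = np.empty((ny, nx))
            with Pool(processes=4) as pool:                    # worker count is a free choice
                for j, row in pool.imap_unordered(interpolate_row, range(ny)):
                    dem[j] = row
            print("DEM grid computed:", dem.shape, "min/max =", dem.min(), dem.max())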

  3. Usefulness and capability of three-dimensional, full high-definition movies for surgical education.

    Science.gov (United States)

    Takano, M; Kasahara, K; Sugahara, K; Watanabe, A; Yoshida, S; Shibahara, T

    2017-12-01

    Because of changing surgical procedures in the fields of oral and maxillofacial surgery, new methods for surgical education are needed and could include recent advances in digital technology. Many doctors have attempted to use digital technology as educational tools for surgical training, and movies have played an important role in these attempts. We have been using a 3D full high-definition (full-HD) camcorder to record movies of intra-oral surgeries. The subjects were medical students and doctors receiving surgical training who did not have actual surgical experience (n = 67). Participants watched an 8-min, 2D movie of orthognathic surgery and subsequently watched the 3D version. After watching the 3D movie, participants were asked to complete a questionnaire. Most participants (84%) rated the 3D movie as excellent or good, citing its appearance of solidity and realism as its main advantages. Almost all participants (99%) answered that 3D movies were quite useful or useful for medical practice. Three-dimensional full-HD movies have the potential to improve the quality of medical education and clinical practice in oral and maxillofacial surgery.

  4. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and the parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on the structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
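
    A minimal one-sample version of the testing idea, a maximum-type statistic whose critical value comes from a multiplier (parametric-style) bootstrap under the estimated covariance structure, might look like the sketch below (NumPy only; it is not the HDtest R package, and the dimension, sample size, signal strength and bootstrap count are placeholders).

        import numpy as np

        rng = np.random.default_rng(5)
        n, p = 60, 500                                  # n << p, the high-dimensional regime

        # Synthetic data; the null is H0: mu = 0, and a few coordinates carry a sparse signal.
        X = rng.normal(size=(n, p))
        X[:, :5] += 0.6

        xbar = X.mean(axis=0)
        sd = X.std(axis=0, ddof=1)
        T_obs = np.max(np.abs(np.sqrt(n) * xbar / sd))  # maximum-type test statistic

        # Bootstrap the null distribution of T while preserving the covariance structure,
        # using Gaussian multipliers on the centred rows.
        Xc = X - xbar
        B, T_boot = 1000, np.empty(1000)
        for b in range(B):
            g = rng.normal(size=n)
            mb = (Xc * g[:, None]).mean(axis=0)
            T_boot[b] = np.max(np.abs(np.sqrt(n) * mb / sd))

        p_value = np.mean(T_boot >= T_obs)
        print(f"T = {T_obs:.2f}, bootstrap p-value = {p_value:.4f}")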

  5. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method poses significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms will be presented and various numerical examples are utilized to demonstrate the efficacy of the method.

  6. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
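
    The construction outlined above, a rank-R separated (sum of products of univariate factors) representation fitted to random samples by regularized alternating least squares, can be imitated on a small scale. The sketch below (NumPy; the monomial factor bases, rank, regularization weight and target function are illustrative assumptions, not the paper's setup) fits such a representation to samples of a function of several random inputs.

        import numpy as np

        rng = np.random.default_rng(6)
        d, deg, R, N = 6, 4, 3, 400                     # inputs, basis size, rank, samples

        def basis(x):
            """Monomial basis 1, x, ..., x^(deg-1) for one input variable."""
            return np.vander(x, deg, increasing=True)   # shape (N, deg)

        # Target model with several random inputs (illustrative only).
        X = rng.uniform(-1, 1, size=(N, d))
        y = np.exp(-np.sum(X ** 2, axis=1)) + np.prod(np.sin(X), axis=1)

        Phi = [basis(X[:, i]) for i in range(d)]        # per-dimension basis evaluations
        C = [rng.normal(scale=0.1, size=(R, deg)) for _ in range(d)]  # factor coefficients

        def factor_values(skip=None):
            """Per-rank product of univariate factors over all (or all but one) dimensions."""
            vals = np.ones((N, R))
            for j in range(d):
                if j != skip:
                    vals *= Phi[j] @ C[j].T
            return vals

        lam = 1e-6
        for sweep in range(20):                         # regularized ALS sweeps
            for i in range(d):
                others = factor_values(skip=i)          # (N, R)
                # Design matrix: column (r, k) is others[:, r] * Phi[i][:, k].
                A = (others[:, :, None] * Phi[i][:, None, :]).reshape(N, R * deg)
                sol = np.linalg.solve(A.T @ A + lam * np.eye(R * deg), A.T @ y)
                C[i] = sol.reshape(R, deg)

        pred = factor_values() @ np.ones(R)
        resid = np.linalg.norm(pred - y) / np.linalg.norm(y)
        print(f"relative residual after ALS: {resid:.3e}")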

  7. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  8. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  9. Parallel 4-dimensional cellular automaton track finder for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, Valentina [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); JINR Joint Institute for Nuclear Research, Dubna (Russian Federation); Kisel, Ivan [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The CBM experiment at FAIR will focus on the measurement of rare probes at interaction rates up to 10 MHz. The beam will provide a free stream of particles, so that information about different collisions may overlap in time. This requires full online event reconstruction not only in space, but also in time, so-called 4D (4-dimensional) event building. This is a task of the First-Level Event Selection (FLES) package. The FLES reconstruction package consists of several modules: track finding, track fitting, short-lived particle finding, event building and selection. The Silicon Tracking System (STS) time measurement information was included in the Cellular Automaton (CA) track finder algorithm. The speed (8.5 ms per event in a time-slice) and efficiency of the 4D track finder algorithm are comparable with those of the event-based analysis. The CA track finder was fully parallelised inside the time-slice. The parallel version achieves a speed-up factor of 10.6 when parallelising across 10 Intel Xeon physical cores with hyper-threading. The first version of event building based on the 4D track finder was implemented.

  10. Effective Rheology of Two-Phase Flow in Three-Dimensional Porous Media: Experiment and Simulation.

    Science.gov (United States)

    Sinha, Santanu; Bender, Andrew T; Danczyk, Matthew; Keepseagle, Kayla; Prather, Cody A; Bray, Joshua M; Thrane, Linn W; Seymour, Joseph D; Codd, Sarah L; Hansen, Alex

    2017-01-01

    We present an experimental and numerical study of immiscible two-phase flow of Newtonian fluids in three-dimensional (3D) porous media to find the relationship between the volumetric flow rate (Q) and the total pressure difference (ΔP) in the steady state. We show that in the regime where capillary forces compete with the viscous forces, the distribution of capillary barriers at the interfaces effectively creates a yield threshold (P_t), making the fluids reminiscent of a Bingham viscoplastic fluid in the porous medium. In this regime, Q depends quadratically on an excess pressure drop (ΔP - P_t). As the flow rate increases, there is a transition beyond which the overall flow is Newtonian and the relationship is linear. In our experiments, we build a model porous medium using a column of glass beads transporting two fluids, deionized water and air. For the numerical study, reconstructed 3D pore networks from real core samples are considered and the transport of wetting and non-wetting fluids through the network is modeled by tracking the fluid interfaces with time. We find agreement between our numerical and experimental results. Our results match with the mean-field results reported earlier.
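
    The constitutive behaviour summarised above, quadratic in the excess pressure drop above an effective yield threshold at low rates and linear at high rates, can be checked on data with a short fitting script. The sketch below (NumPy/SciPy on synthetic measurements; the threshold, prefactor and noise level used to generate the data are made up) fits Q = C (ΔP - P_t)² in the capillary-dominated regime.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(7)

        # Synthetic steady-state measurements (illustrative numbers only).
        Pt_true, C_true = 2.0, 0.35
        dP = np.linspace(2.2, 8.0, 25)
        Q = C_true * (dP - Pt_true) ** 2 * (1 + 0.03 * rng.normal(size=dP.size))

        def bingham_like(dP, C, Pt):
            """Q = C * (dP - Pt)^2 above the effective yield threshold Pt, zero below."""
            return C * np.clip(dP - Pt, 0.0, None) ** 2

        (C_fit, Pt_fit), _ = curve_fit(bingham_like, dP, Q, p0=[0.1, 1.0])
        print(f"fitted prefactor C = {C_fit:.3f}, yield threshold Pt = {Pt_fit:.3f}")

        # A log-log slope close to 2 against the excess pressure indicates the quadratic regime.
        slope = np.polyfit(np.log(dP - Pt_fit), np.log(Q), 1)[0]
        print(f"log-log slope vs excess pressure: {slope:.2f}")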

  11. SAMNet: a network-based approach to integrate multi-dimensional high throughput datasets.

    Science.gov (United States)

    Gosline, Sara J C; Spencer, Sarah J; Ursu, Oana; Fraenkel, Ernest

    2012-11-01

    The rapid development of high throughput biotechnologies has led to an onslaught of data describing genetic perturbations and changes in mRNA and protein levels in the cell. Because each assay provides a one-dimensional snapshot of active signaling pathways, it has become desirable to perform multiple assays (e.g. mRNA expression and phospho-proteomics) to measure a single condition. However, as experiments expand to accommodate various cellular conditions, proper analysis and interpretation of these data have become more challenging. Here we introduce a novel approach called SAMNet, for Simultaneous Analysis of Multiple Networks, that is able to interpret diverse assays over multiple perturbations. The algorithm uses a constrained optimization approach to integrate mRNA expression data with upstream genes, selecting edges in the protein-protein interaction network that best explain the changes across all perturbations. The result is a putative set of protein interactions that succinctly summarizes the results from all experiments, highlighting the network elements unique to each perturbation. We evaluated SAMNet in both yeast and human datasets. The yeast dataset measured the cellular response to seven different transition metals, and the human dataset measured cellular changes in four different lung cancer models of Epithelial-Mesenchymal Transition (EMT), a crucial process in tumor metastasis. SAMNet was able to identify canonical yeast metal-processing genes unique to each commodity in the yeast dataset, as well as human genes such as β-catenin and TCF7L2/TCF4 that are required for EMT signaling but escaped detection in the mRNA and phospho-proteomic data. Moreover, SAMNet also highlighted drugs likely to modulate EMT, identifying a series of less canonical genes known to be affected by the BCR-ABL inhibitor imatinib (Gleevec), suggesting a possible influence of this drug on EMT.

  12. Three-dimensional coupled kinetics/thermal- hydraulic benchmark TRIGA experiments

    International Nuclear Information System (INIS)

    Feltus, Madeline Anne; Miller, William Scott

    2000-01-01

    This research project provides separate effects tests in order to benchmark neutron kinetics models coupled with thermal-hydraulic (T/H) models used in best-estimate codes such as the Nuclear Regulatory Commission's (NRC) RELAP and TRAC code series and industrial codes such as RETRAN. Before this research project was initiated, no adequate experimental data existed for reactivity initiated transients that could be used to assess coupled three-dimensional (3D) kinetics and 3D T/H codes which have been, or are being developed around the world. Using various Test Reactor Isotope General Atomic (TRIGA) reactor core configurations at the Penn State Breazeale Reactor (PSBR), it is possible to determine the level of neutronics modeling required to describe kinetics and T/H feedback interactions. This research demonstrates that the small compact PSBR TRIGA core does not necessarily behave as a point kinetics reactor, but that this TRIGA can provide actual test results for 3D kinetics code benchmark efforts. This research focused on developing in-reactor tests that exhibited 3D neutronics effects coupled with 3D T/H feedback. A variety of pulses were used to evaluate the level of kinetics modeling needed for prompt temperature feedback in the fuel. Ramps and square waves were used to evaluate the detail of modeling needed for the delayed T/H feedback of the coolant. A stepped ramp was performed to evaluate and verify the derived thermal constants for the specific PSBR TRIGA core loading pattern. As part of the analytical benchmark research, the STAR 3D kinetics code (STAR: Space and time analysis of reactors, Version 5, Level 3, Users Guide, Yankee Atomic Electric Company, YEAC 1758, Bolton, MA) was used to model the transient experiments. The STAR models were coupled with the one-dimensional (1D) WIGL and LRA and 3D COBRA (COBRA IIIC: A digital computer program for steady-state and transient thermal-hydraulic analysis of rod bundle nuclear fuel elements, Battelle).

  13. Strategies and Experiences Using High Performance Fortran

    National Research Council Canada - National Science Library

    Shires, Dale

    2001-01-01

    High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of parallel machines, though how well it succeeds has been debatable...

  14. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the accuracy of the three-dimensional coordinates of the shaking table structure acquired with the presented high-speed videogrammetric measuring system. All of the key intermediate steps are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy is achieved for the final three-dimensional spatial coordinates, which confirms that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of a shaking table structure.

  15. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  16. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.

  17. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, et al.

  18. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  19. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  20. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  1. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...

  2. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  3. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  4. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  5. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  6. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  7. Disruption simulation experiment using high-frequency rastering electron beam as the heat source

    International Nuclear Information System (INIS)

    Yamazaki, S.; Seki, M.

    1987-01-01

    The disruption is a serious event which possibly reduces the lifetime of plasma-interactive components, so the effects of the resulting high heat flux on the wall materials must be clearly identified. The authors performed disruption simulation experiments to investigate melting, evaporation, and crack initiation behaviors using an electron beam facility as the heat source. The facility was improved with a high-frequency beam rastering system which provided spatially and temporally uniform heat flux on wider test surfaces. Along with the experiments, thermal and mechanical analyses were also performed. A two-dimensional disruption thermal analysis code (DREAM) was developed for the analyses

  8. A high energy gamma ray astronomy experiment

    International Nuclear Information System (INIS)

    Hofstadter, R.

    1988-01-01

    The author describes work involving NASA's Gamma Ray Observatory (GRO). GRO exemplifies the near zero principle because it investigates new gamma ray phenomena by relying on the space program to take us into the region of zero interference above the earth's atmosphere. In its present form GRO has four experiments

  9. Water Intake by Soil, Experiments for High School Students.

    Science.gov (United States)

    1969

    Presented are a variety of surface run-off experiments for high school students. The experiments are analogies to basic concepts about water intake, as related to water delivery, soil properties and management, floods, and conservation measures. The materials needed to perform the experiments are easily obtainable. The experiments are followed by…

  10. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  11. Chemical shift-dependent apparent scalar couplings: An alternative concept of chemical shift monitoring in multi-dimensional NMR experiments

    International Nuclear Information System (INIS)

    Kwiatkowski, Witek; Riek, Roland

    2003-01-01

    The paper presents an alternative technique for chemical shift monitoring in a multi-dimensional NMR experiment. The monitored chemical shift is coded in the line-shape of a cross-peak through an apparent residual scalar coupling active during an established evolution period or acquisition. The size of the apparent scalar coupling is manipulated with an off-resonance radio-frequency pulse in order to correlate the size of the coupling with the position of the additional chemical shift. The strength of this concept is that chemical shift information is added without an additional evolution period and accompanying polarization transfer periods. This concept was incorporated into the three-dimensional triple-resonance experiment HNCA, adding the information of ¹Hα chemical shifts. The experiment is called HNCA coded HA, since the chemical shift of ¹Hα is coded in the line-shape of the cross-peak along the ¹³Cα dimension

  12. An adaptive optimal ensemble classifier via bagging and rank aggregation with applications to high dimensional data

    Directory of Open Access Journals (Sweden)

    Datta Susmita

    2010-08-01

    Full Text Available Abstract. Background: Generally speaking, different classifiers tend to work well for certain types of data and, conversely, it is usually not known a priori which algorithm will be optimal in any given classification application. In addition, for most classification problems, selecting the best performing classification algorithm amongst a number of competing algorithms is a difficult task for various reasons. For example, the order of performance may depend on the performance measure employed for such a comparison. In this work, we present a novel adaptive ensemble classifier constructed by combining bagging and rank aggregation that is capable of adaptively changing its performance depending on the type of data that is being classified. The attractive feature of the proposed classifier is its multi-objective nature, whereby the classification results can be simultaneously optimized with respect to several performance measures, for example, accuracy, sensitivity and specificity. We also show that our somewhat complex strategy has better predictive performance, as judged on test samples, than a more naive approach that attempts to directly identify the optimal classifier based on the training data performances of the individual classifiers. Results: We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs at the level of the best individual classifier comprising the ensemble or better. Conclusions: For complex high-dimensional datasets resulting from present-day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a fixed standard algorithm set a priori.
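
    As a rough, simplified illustration of the strategy described in the abstract (not the authors' implementation), the sketch below bags a few scikit-learn classifiers on bootstrap resamples, scores each one on its out-of-bag data under several performance measures, aggregates the per-measure ranks, and weights the ensemble vote accordingly; the choice of classifiers, measures and weighting rule is a placeholder.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

        rng = np.random.default_rng(8)
        X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
        X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

        base = [LogisticRegression(max_iter=2000),
                SVC(probability=True),
                RandomForestClassifier(n_estimators=100, random_state=0)]

        scores = np.zeros((len(base), 3))              # rows: classifiers, cols: measures
        fitted = []
        for k, clf in enumerate(base):
            idx = rng.integers(0, len(X_tr), len(X_tr))          # bootstrap (bagging) resample
            oob = np.setdiff1d(np.arange(len(X_tr)), idx)        # out-of-bag indices
            clf.fit(X_tr[idx], y_tr[idx])
            prob = clf.predict_proba(X_tr[oob])[:, 1]
            pred = (prob > 0.5).astype(int)
            scores[k] = [accuracy_score(y_tr[oob], pred),
                         recall_score(y_tr[oob], pred),
                         roc_auc_score(y_tr[oob], prob)]
            fitted.append(clf)

        # Rank aggregation: average each classifier's rank across the measures,
        # then turn the aggregated ranks into voting weights.
        ranks = scores.argsort(axis=0).argsort(axis=0) + 1        # 1 = worst, len(base) = best
        weights = ranks.mean(axis=1)
        weights = weights / weights.sum()

        probs = np.array([clf.predict_proba(X_te)[:, 1] for clf in fitted])
        ensemble_pred = (weights @ probs > 0.5).astype(int)
        print("ensemble accuracy:", accuracy_score(y_te, ensemble_pred))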

  13. High energy diffraction processes - TOTEM experiment

    CERN Document Server

    Kaspar, Jan

    2005-01-01

    We study two problems in this thesis. First, we analyse a model for pp and anti-pp elastic scattering. The model was developed by M.M. Islam and coworkers over the past 25 years. Our aim was to make a prediction for the differential cross section of pp scattering at an energy of 14 TeV, which will be measured by the TOTEM experiment at the LHC at CERN. Since protons carry electromagnetic charge, we had to take into account the electromagnetic interaction and the effects of interference between electromagnetic and hadronic forces. We also analysed the model in the impact parameter representation. This enabled us to gain information about the range of the hadronic forces responsible for elastic, inelastic and total pp and anti-pp scattering. In the second part we present our alignment method for detectors inside the Roman pots of the TOTEM experiment. The method was used during Roman Pot tests on the SPS beam last year.

  14. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  15. Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors

    Science.gov (United States)

    2017-07-01

    Report AFRL-RY-WP-TR-2017-0143: Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors, Qing Hao, The University of Arizona. This report is available to the general public, including foreign nationals. (Only front-matter and reference fragments were extracted; the abstract itself is not recoverable.)

  16. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulations show that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO provides consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online.
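
    The sampling design and the joint sparse fit are easy to sketch; the following Python fragment (assuming scikit-learn and NumPy) simulates a quantitative trait, keeps only the phenotype extremes, and fits a cross-validated LASSO jointly over all predictors. The decorrelated-score hypothesis test that defines EPS-LASSO is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 2000, 500
X = rng.standard_normal((n, p))                 # genotypes / predictors
beta = np.zeros(p)
beta[:5] = 0.5                                  # a few true causal effects
y = X @ beta + rng.standard_normal(n)           # quantitative trait

# Extreme phenotype sampling: keep only the lower and upper 10% of the trait.
lo, hi = np.quantile(y, [0.1, 0.9])
keep = (y <= lo) | (y >= hi)

model = LassoCV(cv=5).fit(X[keep], y[keep])     # joint sparse fit on the extremes
print("selected predictors:", np.flatnonzero(model.coef_ != 0)[:20])
```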

  17. Computational Search for Two-Dimensional MX2 Semiconductors with Possible High Electron Mobility at Room Temperature

    Directory of Open Access Journals (Sweden)

    Zhishuo Huang

    2016-08-01

    Full Text Available Neither of the two typical two-dimensional materials, graphene and single-layer MoS2, is good enough for developing semiconductor logic devices. We calculated the electron mobility of 14 two-dimensional semiconductors with composition MX2, where M (= Mo, W, Sn, Hf, Zr and Pt) is a transition metal and X is S, Se or Te. We approximated the electron-phonon scattering matrix by deformation potentials, within which long-wave longitudinal acoustical and optical phonon scatterings were included. Piezoelectric scattering in the compounds without inversion symmetry is also taken into account. We found that, out of the 14 compounds, WS2, PtS2 and PtSe2 are promising for logic devices regarding their possible high electron mobility and finite band gap. In particular, the phonon-limited electron mobility in PtSe2 reaches about 4000 cm²·V⁻¹·s⁻¹ at room temperature, which is the highest among the compounds, with an indirect bandgap of about 1.25 eV under the local density approximation. Our results can serve as a first guide for experiments to synthesize better two-dimensional materials for future semiconductor devices.

  18. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Directory of Open Access Journals (Sweden)

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a six-dimensional force/torque sensor with high performance. Through the implementation of the SDDO, the developed sensor simultaneously achieves high sensitivity, linearity, stiffness and repeatability, which is hard to obtain with traditional force/torque sensors. The integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiment shows impressive characteristics, so the developed force/torque sensor can be used effectively in industry, and the design methods can also be used to develop industrial products.

  19. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.

  20. Pd nanoparticles supported on three-dimensional graphene aerogels as highly efficient catalysts for methanol electrooxidation

    International Nuclear Information System (INIS)

    Liu, Mingrui; Peng, Cheng; Yang, Wenke; Guo, Jiaojiao; Zheng, Yixiong; Chen, Peiqin; Huang, Tingting; Xu, Jing

    2015-01-01

    Well-dispersed Pd nanoparticles supported on three-dimensional graphene aerogels (Pd/3DGA) were successfully prepared via a facile and efficient hydrothermal method without surfactant and template. The morphology and structure of the as-prepared Pd/3DGA nanocomposites were investigated by scanning electron microscopy (SEM) and X-ray diffraction (XRD). SEM showed that the Pd nanoparticles with a small average diameter and narrow size distribution were uniformly deposited on the surface of the self-assembled three-dimensional graphene aerogels. Raman spectra revealed the surface properties of 3DGA and its interaction with metallic nanoparticles. Cyclic voltammetric (CV) and chronoamperometric (CA) experiments further exhibited its superior catalytic activity and stability for the electro-oxidation of methanol in alkaline media, making it a promising anodic catalyst for direct alkaline alcohol fuel cells (DAAFCs).

  1. Experiments on very high energy heavy ions

    International Nuclear Information System (INIS)

    Willis, W.J.

    1981-01-01

    In this paper I describe experimental techniques which could be used to investigate central collisions of very high energy heavy ions. For my purposes, the energy range is defined by the number of pions produced, N_π >> 100, and consequently N_π >> N_nucleon. In this regime we may expect that new phenomena will appear. (orig.)

  2. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.

  3. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  4. Two-dimensional magnetic field evolution measurements and plasma flow speed estimates from the coaxial thruster experiment

    International Nuclear Information System (INIS)

    Black, D.C.; Mayo, R.M.; Gerwin, R.A.; Schoenberg, K.F.; Scheuer, J.T.; Hoyt, R.P.; Henins, I.

    1994-01-01

    Local, time-dependent magnetic field measurements have been made in the Los Alamos coaxial thruster experiment (CTX) [C. W. Barnes et al., Phys. Fluids B 2, 1871 (1990); J. C. Fernandez et al., Nucl. Fusion 28, 1555 (1988)] using a 24-coil magnetic probe array (eight spatial positions, three-axis probes). The CTX is a magnetized, coaxial plasma gun presently being used to investigate the viability of high pulsed power plasma thrusters for advanced electric propulsion. Previous efforts on this device have indicated that high pulsed power plasma guns are attractive candidates for advanced propulsion that employ ideal magnetohydrodynamic (MHD) plasma stream flow through self-formed magnetic nozzles. Indirect evidence of magnetic nozzle formation was obtained from plasma gun performance and measurements of directed axial velocities up to v_z ∼ 10^7 cm/s. The purpose of this work is to make direct measurement of the time evolving magnetic field topology. The intent is to both identify that applied magnetic field distortion by the highly conductive plasma is occurring, and to provide insight into the details of discharge evolution. Data from a magnetic fluctuation probe array have been used to investigate the details of applied magnetic field deformation through the reconstruction of time-dependent flux profiles. Experimentally observed magnetic field line distortion has been compared to that predicted by a simple one-dimensional (1-D) model of the discharge channel. Such a comparison is utilized to estimate the axial plasma velocity in the thruster. Velocities determined in this manner are in approximate agreement with the predicted self-field magnetosonic speed and those measured by a time-of-flight spectrometer.

  5. A three-dimensional stratigraphic model for aggrading submarine channels based on laboratory experiments, numerical modeling, and sediment cores

    Science.gov (United States)

    Limaye, A. B.; Komatsu, Y.; Suzuki, K.; Paola, C.

    2017-12-01

    Turbidity currents deliver clastic sediment from continental margins to the deep ocean, and are the main driver of landscape and stratigraphic evolution in many low-relief, submarine environments. The sedimentary architecture of turbidites, including the spatial organization of coarse and fine sediments, is closely related to the aggradation, scour, and lateral shifting of channels. Seismic stratigraphy indicates that submarine meandering channels often aggrade rapidly relative to lateral shifting, and develop channel sand bodies with high vertical connectivity. In comparison, the stratigraphic architecture developed by submarine braided channels is relatively uncertain. We present a new stratigraphic model for submarine braided channels that integrates predictions from laboratory experiments and flow modeling with constraints from sediment cores. In the laboratory experiments, a saline density current developed subaqueous channels in plastic sediment. The channels aggraded to form a deposit with a vertical scale of approximately five channel depths. We collected topography data during aggradation to (1) establish relative stratigraphic age, and (2) estimate the sorting patterns of a hypothetical grain size distribution. We applied a numerical flow model to each topographic surface and used modeled flow depth as a proxy for relative grain size. We then conditioned the resulting stratigraphic model to observed grain size distributions using sediment core data from the Nankai Trough, offshore Japan. Using this stratigraphic model, we establish new, quantitative predictions for the two- and three-dimensional connectivity of coarse sediment as a function of fine-sediment fraction. Using this case study as an example, we will highlight outstanding challenges in relating the evolution of low-relief landscapes to the stratigraphic record.

  6. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure to form a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair is established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
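
    A small numerical sketch of the projection and orthogonal-component idea, assuming unit-normalized nonnegative data: the similarity is the cosine (projection length), the dissimilarity is the length of the orthogonal component, and the triangle inequality for the dissimilarity is checked on random triples. The exact definitions and bounds in the paper may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((200, 1000))                    # high-dimensional, nonnegative data
X /= np.linalg.norm(X, axis=1, keepdims=True)  # normalize to unit length

S = np.clip(X @ X.T, 0.0, 1.0)                 # similarity: cosine = projection length
D = np.sqrt(1.0 - S ** 2)                      # dissimilarity: orthogonal-component length

# Numerically check d(i,k) <= d(i,j) + d(j,k) on random triples.
i, j, k = rng.integers(0, len(X), (3, 10000))
violations = int(np.sum(D[i, k] > D[i, j] + D[j, k] + 1e-12))
print("triangle-inequality violations:", violations)
```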

  7. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data and biclustering (applied to gene expression data analysis. The common limitation of both methodologies is the limited applicability for very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and freely available for researchers.
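
    The core bit-table trick, counting itemset support through matrix and vector products over a Boolean item-by-transaction table, can be sketched in a few lines of Python/NumPy; the full closed-itemset and bicluster enumeration of the paper is not reproduced here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
B = rng.random((8, 100)) < 0.4          # Boolean bit-table: items (rows) x transactions
min_support = 20

def support(itemset):
    # Element-wise AND of the selected item rows, realized as a product of 0/1 rows;
    # the row sum is the number of transactions containing the whole itemset.
    return int(np.prod(B[list(itemset)], axis=0).sum())

frequent = [c for r in (1, 2, 3)
            for c in combinations(range(B.shape[0]), r)
            if support(c) >= min_support]
print(len(frequent), "frequent itemsets of size <= 3")
```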

  8. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights on cellular heterogeneity in the last decade provoke the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods with the designated biological questions they seek to answer will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed describing superconductivity in the framework of a nonadiabatic Heisenberg model in order to interprete the outstanding symmetry proper ties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin-angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin-angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-T c compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energetic excitations (of well-defined symmetry) which mediate pair formation. This fact is proposed being responsible for the high transition temperatures of these compounds. (author)

  10. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
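
    For intuition, a short NumPy sketch of one natural estimator in this setting: covariance entries computed from pairwise-complete observations under missingness completely at random, followed by banding to exploit a bandable structure. The bandwidth choice and the theoretical tuning from the paper are not addressed.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 300, 50, 3
Sigma = np.array([[0.8 ** abs(i - j) for j in range(p)] for i in range(p)])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X[rng.random((n, p)) < 0.2] = np.nan                 # missing completely at random

mask = ~np.isnan(X)
Xc = np.where(mask, X - np.nanmean(X, axis=0), 0.0)  # centre; zero-out missing entries
counts = mask.T.astype(float) @ mask.astype(float)   # pairwise-complete sample sizes
S = (Xc.T @ Xc) / np.maximum(counts - 1, 1.0)        # pairwise-complete covariance

band = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= k
S_banded = S * band                                  # banding for a bandable target
print("spectral-norm error:", np.linalg.norm(S_banded - Sigma, 2))
```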

  12. High resolution SETI: Experiences and prospects

    Science.gov (United States)

    Horowitz, Paul; Clubok, Ken

    Megachannel spectroscopy with sub-Hertz resolution constitutes an attractive strategy for a microwave search for extraterrestrial intelligence (SETI), assuming the transmission of a narrowband radiofrequency beacon. Such resolution matches the properties of the interstellar medium, and the necessary Doppler corrections provide a high degree of interference rejection. We have constructed a frequency-agile receiver with an FFT-based 8 megachannel digital spectrum analyzer, on-line signal recognition, and multithreshold archiving. We are using it to conduct a meridian transit search of the northern sky at the Harvard-Smithsonian 26-m antenna, with a second identical system scheduled to begin observations in Argentina this month. Successive 400 kHz spectra, at 0.05 Hz resolution, are searched for features characteristic of an intentional narrowband beacon transmission. These spectra are centered on guessable frequencies (such as λ21 cm), referenced successively to the local standard of rest, the galactic barycenter, and the cosmic blackbody rest frame. This search has rejected interference admirably, but is greatly limited both in total frequency coverage and sensitivity to signals other than carriers. We summarize five years of high resolution SETI at Harvard, in the context of answering the questions "How useful is narrowband SETI, how serious are its limitations, what can be done to circumvent them, and in what direction should SETI evolve?" Increasingly powerful signal processing hardware, combined with ever-higher memory densities, are particularly relevant, permitting the construction of compact and affordable gigachannel spectrum analyzers covering hundreds of megahertz of instantaneous bandwidth.

  13. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  14. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas; Arsenault, Éric; Heiniger, Leo-Philipp; Soheilnia, Navid; Brillet, Jérémie; Moehl, Thomas; Zakeeruddin, Shaik; Ozin, Geoffrey A.; Grätzel, Michael

    2011-01-01

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  15. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering transports of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner is found to have an effect on the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  16. Reduced, three-dimensional, nonlinear equations for high-β plasmas including toroidal effects

    International Nuclear Information System (INIS)

    Schmalz, R.

    1980-11-01

    The resistive MHD equations for toroidal plasma configurations are reduced by expanding to second order in ε, the inverse aspect ratio, allowing for high β = μ₀p/B² of order ε. The result is a closed system of nonlinear, three-dimensional equations where the fast magnetohydrodynamic time scale is eliminated. In particular, the equation for the toroidal velocity remains decoupled. (orig.)

  17. Two and three dimensional heat analysis inside a high pressure electrical discharge tube

    International Nuclear Information System (INIS)

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents the heat transfer analysis for a horizontal high-pressure mercury vapor tube. To obtain a more realistic numerical simulation, heat radiation in different wavelength bands has been included in addition to convection and conduction heat transfer. The analysis for different gases at different pressures in the two- and three-dimensional cases has been carried out and the results compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied.

  18. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed

  19. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  20. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  1. Experiments on high efficiency aerosol filtration

    International Nuclear Information System (INIS)

    Mazzini, M.; Cuccuru, A.; Kunz, P.

    1977-01-01

    Research on high efficiency aerosol filtration by the Nuclear Engineering Institute of Pisa University and by CAMEN in collaboration with CNEN is outlined. HEPA filter efficiency was studied as a function of the type and size of the test aerosol, and as a function of flowrate (±50% of the nominal value), air temperature (up to 70 °C), relative humidity (up to 100%), and durability in a corrosive atmosphere (up to 140 hours in NaCl mist). In the selected experimental conditions these influences were appreciable but are not sufficient to be significant in industrial HEPA filter applications. Planned future research is outlined: measurement of the efficiency of two HEPA filters in series using a fixed particle size; dependence of the efficiency on air temperature up to 300-500 °C; performance when subject to smoke from burning organic materials (natural rubber, neoprene, miscellaneous plastics). Such studies are relevant to possible accidental fires in a plutonium laboratory.

  2. Review of high energy heavy ion experiments

    International Nuclear Information System (INIS)

    Miake, Yasuo

    2000-01-01

    It has been proposed that in high energy heavy ion collisions physical conditions similar to the early stage of the Universe can be established in the laboratory. The new phase of matter expected to be created is called the quark-gluon plasma (QGP). Based on the motivation to create the QGP in the laboratory, heavy ion beams have been accelerated at the AGS of Brookhaven National Laboratory and also at the CERN-SPS. Several interesting features of the data have been reported, among which are: the suppression of J/ψ production in Pb+Pb collisions, the enhancement of low mass lepton pairs, and the collective behavior of hadron production. These features are reviewed under the key words of Deconfinement, Chiral Restoration and Collectivity in the lecture. (author)

  3. Application of radix sorting in high energy physics experiment

    International Nuclear Information System (INIS)

    Chen Xuan; Gu Minhao; Zhu Kejun

    2012-01-01

    In high energy physics experiments, there is often a need to sort large volumes of experimental data. To meet this demand, this paper introduces a radix sorting algorithm whose sub-sort is counting sort and whose time complexity is O(n), exploiting the fact that high energy physics experiment data are marked by a time stamp. The paper gives the description, analysis, implementation and experimental results of the sorting algorithm, as sketched below. (authors)
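
    The idea can be sketched as follows: a least-significant-digit radix sort whose stable sub-sort is counting sort, keyed on each event's integer time stamp; every pass is linear in the number of events. The record layout and key width below are made up for illustration.

```python
def counting_sort_by_digit(events, exp, base=256):
    # Stable counting sort of events on one base-256 digit of the time stamp.
    counts = [0] * base
    for e in events:
        counts[(e["timestamp"] // exp) % base] += 1
    for d in range(1, base):
        counts[d] += counts[d - 1]              # prefix sums give output positions
    out = [None] * len(events)
    for e in reversed(events):                  # reverse traversal keeps the sort stable
        d = (e["timestamp"] // exp) % base
        counts[d] -= 1
        out[counts[d]] = e
    return out

def radix_sort(events, key_bits=48, base=256):
    exp = 1
    while exp < (1 << key_bits):                # one O(n) counting-sort pass per digit
        events = counting_sort_by_digit(events, exp, base)
        exp *= base
    return events

events = [{"timestamp": t, "payload": i}
          for i, t in enumerate([42, 7, 90210, 7, 1 << 40])]
print([e["timestamp"] for e in radix_sort(events)])
```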

  4. Towards a 3-dimensional atlas of the developing human embryo: The Amsterdam experience

    NARCIS (Netherlands)

    de Bakker, Bernadette S.; de Jong, Kees H.; Hagoort, Jaco; Oostra, Roelof-Jan; Moorman, Antoon F. M.

    2012-01-01

    Knowledge of complex morphogenetic processes that occur during embryonic development is essential for understanding anatomy and to get insight in the pathogenesis of congenital malformations. Understanding these processes can be facilitated by using a three-dimensional (3D) developmental series of

  5. Capturing the added value of three-dimensional television : viewing experience and naturalness of stereoscopic images

    NARCIS (Netherlands)

    Seuntiëns, P.J.H.; Heynderickx, I.E.J.; IJsselsteijn, W.A.

    2008-01-01

    The term "image quality" is often used to describe the performance of an imaging system. Recent research showed however that image quality may not be the most appropriate term to capture the evaluative processes associated with experiencing three-dimensional (3D) images. The added value of depth in

  6. Three-dimensional organization of the human interphase nucleus: Experiments compared to simulations.

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); C. Münkel (Christian); W. Waldeck (Waldemar); J. Langowski (Jörg)

    2000-01-01

    Despite the successful linear sequencing of the human genome its three-dimensional structure is widely unknown, although it is important for gene regulation and replication. For a long time the interphase nucleus has been viewed as a 'spaghetti soup' of DNA without much internal

  7. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Science.gov (United States)

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
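
    A minimal sketch of the cue-integration picture in Python/NumPy: word prototypes are points in a feature space, the auditory and visual streams are independent noisy observations of the spoken word, and recognition reads out the posterior over words. The dimensionality, noise levels and flat prior are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_words, dim = 50, 20
words = rng.standard_normal((n_words, dim))    # word prototypes in the feature space
true = 3
sigma_a, sigma_v = 2.0, 1.0                    # noisy auditory cue, cleaner visual cue

obs_a = words[true] + sigma_a * rng.standard_normal(dim)
obs_v = words[true] + sigma_v * rng.standard_normal(dim)

# log p(word | obs_a, obs_v) up to a constant, assuming a flat prior over words
log_post = (-np.sum((obs_a - words) ** 2, axis=1) / (2 * sigma_a ** 2)
            - np.sum((obs_v - words) ** 2, axis=1) / (2 * sigma_v ** 2))
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("reported word:", int(post.argmax()), "posterior mass:", float(post.max()))
```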

  8. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Directory of Open Access Journals (Sweden)

    Wei Ji Ma

    Full Text Available Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  9. The Yosemite Extreme Panoramic Imaging Project: Monitoring Rockfall in Yosemite Valley with High-Resolution, Three-Dimensional Imagery

    Science.gov (United States)

    Stock, G. M.; Hansen, E.; Downing, G.

    2008-12-01

    Yosemite Valley experiences numerous rockfalls each year, with over 600 rockfall events documented since 1850. However, monitoring rockfall activity has proved challenging without high-resolution "basemap" imagery of the Valley walls. The Yosemite Extreme Panoramic Imaging Project, a partnership between the National Park Service and xRez Studio, has created an unprecedented image of Yosemite Valley's walls by utilizing gigapixel panoramic photography, LiDAR-based digital terrain modeling, and three-dimensional computer rendering. Photographic capture was accomplished by 20 separate teams shooting from key overlapping locations throughout Yosemite Valley. The shots were taken simultaneously in order to ensure uniform lighting, with each team taking over 500 overlapping shots from each vantage point. Each team's shots were then assembled into 20 gigapixel panoramas. In addition, all 20 gigapixel panoramas were projected onto a 1 meter resolution digital terrain model in three-dimensional rendering software, unifying Yosemite Valley's walls into a vertical orthographic view. The resulting image reveals the geologic complexity of Yosemite Valley in high resolution and represents one of the world's largest photographic captures of a single area. Several rockfalls have already occurred since image capture, and repeat photography of these areas clearly delineates rockfall source areas and failure dynamics. Thus, the imagery has already proven to be a valuable tool for monitoring and understanding rockfall in Yosemite Valley. It also sets a new benchmark for the quality of information a photographic image, enabled with powerful new imaging technology, can provide for the earth sciences.

  10. High-definition resolution three-dimensional imaging systems in laparoscopic radical prostatectomy: randomized comparative study with high-definition resolution two-dimensional systems.

    Science.gov (United States)

    Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi

    2015-08-01

    Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery involves converting two-dimensional (2D) images into 3D images and depth perception rearrangement. 3D imaging may remove the need for depth perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcome of 3D-high-definition (HD) resolution and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether an LRP under HD resolution 3D imaging is superior to that under HD resolution 2D imaging in perioperative outcome, feasibility, and fatigue. One-hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was time to perform vesicourethral anastomosis (VUA), which is technically demanding and may include a number of technical difficulties considered in laparoscopic surgeries. VUA time was not significantly shorter in the 3D group (26.7 min, mean) compared with the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors for shorter VUA times (p = 0.000, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion case from 3D to 2D or LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency. Results were not different between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.

  11. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that the increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, by means of numerical analysis the dynamics of three large hypercycle networks is also studied, focusing on their extinction dynamics associated with the ghosts. Such networks allow exploration of the properties of the ghosts living in high dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
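
    The delayed transient can be reproduced with a toy symmetric hypercycle. The following sketch (assuming SciPy) uses one common formulation, catalytic replication limited by a shared resource with a linear decay eps, for which the saddle-node of the symmetric system sits at eps_c = k/(4n); with eps just past that value the trajectory lingers near the ghost before collapsing. This is an illustration, not the exact system analyzed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hypercycle(t, x, k=1.0, eps=0.051):
    # Toy symmetric hypercycle: member i is replicated catalytically by member i-1,
    # growth is limited by a shared resource (1 - sum x), and eps is a linear decay.
    total = x.sum()
    prev = np.roll(x, 1)
    return x * (k * prev * (1.0 - total) - eps)

n = 5                                           # eps_c = k / (4 n) = 0.05 for this form
x0 = np.full(n, 0.15)
sol = solve_ivp(hypercycle, (0.0, 5000.0), x0, max_step=1.0)

extinct = sol.y.sum(axis=0) < 1e-3              # total population effectively gone
t_ext = sol.t[extinct][0] if extinct.any() else np.inf
print("extinction time (note the long ghost-delayed transient):", t_ext)
```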

  12. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high dimensional space to a two dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to see with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.

  13. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of the large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that the feature (dimension) selection is a better choice for high-dimensional FV/VLAD than the feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that, many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them and useful dimensions altogether using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combining with the 1-bit quantization, feature selection has achieved both higher accuracy and less computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
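
    The select-then-binarize recipe can be sketched briefly: rank dimensions by a simple supervised importance score, keep the top k, and apply 1-bit (sign) quantization. The score below (a between/within variance ratio on synthetic data) is only a stand-in for the paper's importance-sorting algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 500, 8192, 1024                      # n descriptors of dimension d; keep k
X = rng.standard_normal((n, d))                # stand-in for FV/VLAD vectors
y = rng.integers(0, 10, n)                     # class labels

# Supervised importance: between-class over within-class variance per dimension.
grand = X.mean(axis=0)
between = np.zeros(d)
within = np.zeros(d)
for c in np.unique(y):
    Xc = X[y == c]
    between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
    within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
importance = between / (within + 1e-12)

keep = np.argsort(importance)[::-1][:k]        # feature (dimension) selection
codes = X[:, keep] >= 0                        # 1-bit quantization: k bits per vector
print("compact code shape:", codes.shape)
```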

  14. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day. Such a trend poses various challenges as these methods are not suitable to directly apply to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
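
    A compact sketch of the partition-train-vote scheme follows (assuming scikit-learn): the feature space is split into disjoint subsets, one SVM is trained per subset, and predictions are combined by majority voting. The correlation-based ordering used to group features here is only a crude stand-in for the redundancy-based partitioning described in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_parts = 5
order = np.argsort(np.corrcoef(X_tr.T)[0])        # crude redundancy-based ordering
partitions = np.array_split(order, n_parts)        # disjoint feature subsets

members = [SVC(kernel="linear").fit(X_tr[:, p], y_tr) for p in partitions]
votes = np.array([m.predict(X_te[:, p]) for m, p in zip(members, partitions)])
majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority voting over members
print("ensemble accuracy:", accuracy_score(y_te, majority))
```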

  15. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, high-resolution CT and ultra-high-speed CT applicable to the heart are still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time. Also, the direct filtered back projection (DFBP) method has been employed as the reconstruction method to process fan-beam projection data directly. Although the two-dimensional Fourier transform (TFT) method, which is significantly faster than the FBP method, was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded due to the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally it has been shown, by numerical and visual evaluation based on simulation and actual data, that spline interpolation allows the acquisition of high-quality images with fewer errors. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)

  16. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Full Text Available The supercapacitor is a new type of energy-storage device and has attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered to be a promising supercapacitor material because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and then a three-dimensional (3D) graphene foam is prepared by using nickel foam as the template and FeCl3/HCl solution as the etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance and exhibits a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. It is expected that the 3D graphene foam will have potential applications in supercapacitors.

  17. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
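
    The geometric core of such multi-view tracking, pinhole projection and linear (DLT) triangulation, can be sketched as follows; the camera matrices are hypothetical, and the actual calibration and temporal matching steps of the experiment are omitted.

```python
import numpy as np

def project(P, X):
    # Pinhole projection of a 3D point to pixel coordinates.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, uv1, uv2):
    # Linear (DLT) triangulation: each view contributes two rows of A X = 0.
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    Xh = np.linalg.svd(A)[2][-1]          # null-space vector (homogeneous 3D point)
    return Xh[:3] / Xh[3]

# Two hypothetical calibrated views: shared intrinsics, stereo baseline along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])       # a "particle" position in metres
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print("true:", X_true, "recovered:", X_rec)
```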

  18. Hierarchical one-dimensional ammonium nickel phosphate microrods for high-performance pseudocapacitors

    CSIR Research Space (South Africa)

    Raju, K

    2015-12-01

    Full Text Available Scientific Reports 5:17629, DOI: 10.1038/srep17629. Kumar Raju & Kenneth I. Ozoemena, Hierarchical One-Dimensional Ammonium Nickel Phosphate Microrods for High-Performance Pseudocapacitors. High-performance electrochemical capacitors... Hierarchical 1-D and 2-D materials maximize the supercapacitive properties due to their unique ability to permit ion...

  19. On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy

    Science.gov (United States)

    Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.

    2017-10-01

    High dose rate brachytherapy affords a frequent reassurance of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions. The latter are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy, and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
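
    For reference, a minimal NumPy sketch of the classical multi-dimensional scaling step that reference-free positioning of this kind can build on: pairwise distances are double-centered and eigendecomposed to recover coordinates up to a rigid motion. The catheter-specific processing and the α-stable modelling of deviations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.standard_normal((30, 3))                             # e.g. true dwell positions
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1) # pairwise distances only

n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                                    # double-centred Gram matrix
w, V = np.linalg.eigh(B)
top = np.argsort(w)[::-1][:3]
coords = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))          # coordinates up to rigid motion

D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print("max pairwise-distance error:", np.abs(D - D_rec).max())
```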

  20. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  1. Experiment of flow regime map and local condensing heat transfer coefficients inside three dimensional inner microfin tubes

    Science.gov (United States)

    Du, Yang; Xin, Ming Dao

    1999-03-01

    This paper develops a new type of three dimensional inner microfin tube. Experimental results on the flow patterns for horizontal condensation inside these tubes are reported. Within the test conditions, the flow patterns for horizontal condensation inside the newly made tubes are divided into annular flow, stratified flow and intermittent flow. Local heat transfer coefficients have been measured systematically for the different flow patterns, as well as their variation with the vapor dryness fraction. As compared with the heat transfer coefficients of two dimensional inner microfin tubes, those of the three dimensional inner microfin tubes increase by 47-127% in the annular flow region, 38-183% for the stratified flow and 15-75% for the intermittent flow, respectively. The enhancement factor of the local heat transfer coefficients ranges from 1.8 to 6.9 for vapor dryness fractions from 0.05 to 1.

  2. Some aspects of the applications of wire chambers in high energy physics experiments at large accelerators

    International Nuclear Information System (INIS)

    Turala, M.

    1982-01-01

    The application of proportional and drift chambers in four large spectrometers at the accelerators of IHEP Serpukhov and CERN Geneva is described. The operation of wire chambers at high intensities and high particle multiplicities is discussed. The results of investigations of their efficiency, spatial resolution (for one- and two-dimensional readout) and long-term stability are presented. Problems of preselecting a given class of events are discussed. The systems for preselection of defined multiplicities or scattering angles of particles, in which proportional chambers have been used, are described, and the results of their application in real experiments are presented. (author)

  3. Five-dimensional visualization of phase transition in BiNiO3 under high pressure

    International Nuclear Information System (INIS)

    Liu, Yijin; Wang, Junyue; Yang, Wenge; Azuma, Masaki; Mao, Wendy L.

    2014-01-01

    Colossal negative thermal expansion was recently discovered in BiNiO3, associated with a low-density to high-density phase transition under high pressure. The varying proportion of co-existing phases plays a key role in the macroscopic behavior of this material. Here, we utilize a recently developed X-ray Absorption Near Edge Spectroscopy Tomography method and resolve the mixture of high/low pressure phases as a function of pressure at tens-of-nanometer resolution, taking advantage of the charge transfer during the transition. This five-dimensional (X, Y, Z, energy, and pressure) visualization of the phase boundary provides a high resolution method to study the interface dynamics of high/low pressure phases.

  4. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...

  5. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  6. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► The 3D graphene/polyaniline has never been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized by in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained by means of CVs with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating better cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of 3D graphene could contribute to the high specific capacitance and good cyclic life.

  7. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation

    Directory of Open Access Journals (Sweden)

    Zhou Yang

    2017-01-01

    Full Text Available High pressures, high speeds, low noise and miniaturization are the directions of development in hydraulic pumps. Following this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulations of the HRP with and without cavitation are carried out by means of computational fluid dynamics (CFD) in this paper, which contributes to understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.

  8. Transverse Kerr effect in one-dimensional magnetophotonic crystals: Experiment and theory

    International Nuclear Information System (INIS)

    Erokhin, S.; Boriskina, Yu.; Vinogradov, A.; Inoue, M.; Kobayashi, D.; Fedyanin, A.; Gan'shina, E.; Kochneva, M.; Granovsky, A.

    2006-01-01

    Magneto-optical transverse Kerr and Faraday effects are studied experimentally and theoretically in one-dimensional magnetophotonic crystals fabricated from a stack of four repetitions of Bi-substituted yttrium iron garnet and SiO2 layers. The results of theoretical calculations in the framework of a modified-matrix approach are consistent with the experimental data, with the exception of one cusp at 480 nm in the transverse Kerr effect spectra. Possible mechanisms of this disagreement are discussed

  9. Power1D: a Python toolbox for numerical power estimates in experiments involving one-dimensional continua

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2017-07-01

    Full Text Available The unit of experimental measurement in a variety of scientific applications is the one-dimensional (1D) continuum: a dependent variable whose value is measured repeatedly, often at regular intervals, in time or space. A variety of software packages exist for computing continuum-level descriptive statistics and also for conducting continuum-level hypothesis testing, but very few offer power computing capabilities, where ‘power’ is the probability that an experiment will detect a true continuum signal given experimental noise. Moreover, no software package yet exists for arbitrary continuum-level signal/noise modeling. This paper describes a package called power1d which implements (a) two analytical 1D power solutions based on random field theory (RFT) and (b) a high-level framework for computational power analysis using arbitrary continuum-level signal/noise modeling. First, power1d’s two RFT-based analytical solutions are numerically validated using its random continuum generators. Second, arbitrary signal/noise modeling is demonstrated to show how power1d can be used for flexible modeling well beyond the assumptions of RFT-based analytical solutions. Its computational demands are non-excessive, requiring on the order of only 30 s to execute on standard desktop computers, but with approximate solutions available much more rapidly. Its broad signal/noise modeling capabilities along with relatively rapid computations imply that power1d may be a useful tool for guiding experimentation involving multiple measurements of similar 1D continua, and in particular to ensure that an adequate number of measurements is made to detect assumed continuum signals.
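
    The computational power analysis described above can be illustrated with a generic Monte Carlo sketch: simulate smooth 1D noise continua with and without an assumed signal, take a null-distribution threshold on the maximum t statistic, and count how often the signal is detected. This is not power1d's actual API; the node count, sample size, noise smoothness and signal shape below are illustrative assumptions.

```python
# Minimal Monte Carlo sketch of continuum-level (1D) power estimation.
# NOT the power1d API; all names and parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)

Q = 101          # number of continuum nodes
J = 12           # sample size (responses per experiment)
FWHM = 20.0      # smoothness of the noise, in nodes
sigma = FWHM / (2 * np.sqrt(2 * np.log(2)))

def smooth_noise(J, Q):
    """Smooth Gaussian random continua, rescaled to unit sample variance."""
    e = gaussian_filter1d(rng.standard_normal((J, Q)), sigma, axis=1)
    return e / e.std(axis=1, keepdims=True)

# Assumed true signal: a Gaussian pulse centred at node 60.
q = np.arange(Q)
signal = 1.3 * np.exp(-0.5 * ((q - 60) / 8.0) ** 2)

def max_t(sample):
    """Maximum one-sample t value across the continuum."""
    t = sample.mean(axis=0) / (sample.std(axis=0, ddof=1) / np.sqrt(J))
    return t.max()

n_iter = 2000
# Critical threshold: 95th percentile of the maximum t under the null.
null_max = np.array([max_t(smooth_noise(J, Q)) for _ in range(n_iter)])
t_crit = np.percentile(null_max, 95)

# Power: probability that the maximum t exceeds that threshold when the signal is present.
alt_max = np.array([max_t(signal + smooth_noise(J, Q)) for _ in range(n_iter)])
power = (alt_max > t_crit).mean()
print(f"critical max-t: {t_crit:.2f}, estimated power: {power:.2f}")
```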

  10. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high dimensional (m > 10,000) data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly in our limited computational studies.

  11. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  12. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  13. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  14. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  15. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  16. Highly Efficient Broadband Yellow Phosphor Based on Zero-Dimensional Tin Mixed-Halide Perovskite.

    Science.gov (United States)

    Zhou, Chenkun; Tian, Yu; Yuan, Zhao; Lin, Haoran; Chen, Banghao; Clark, Ronald; Dilbeck, Tristan; Zhou, Yan; Hurley, Joseph; Neu, Jennifer; Besara, Tiglet; Siegrist, Theo; Djurovich, Peter; Ma, Biwu

    2017-12-27

    Organic-inorganic hybrid metal halide perovskites have emerged as a highly promising class of light emitters, which can be used as phosphors for optically pumped white light-emitting diodes (WLEDs). By controlling the structural dimensionality, metal halide perovskites can exhibit tunable narrow and broadband emissions from the free-exciton and self-trapped excited states, respectively. Here, we report a highly efficient broadband yellow light emitter based on zero-dimensional tin mixed-halide perovskite (C4N2H14Br)4SnBrxI6-x (x = 3). This rare-earth-free ionically bonded crystalline material possesses a perfect host-dopant structure, in which the light-emitting metal halide species (SnBrxI6-x4-, x = 3) are completely isolated from each other and embedded in the wide band gap organic matrix composed of C4N2H14Br-. The strongly Stokes-shifted broadband yellow emission that peaked at 582 nm from this phosphor, which is a result of excited state structural reorganization, has an extremely large full width at half-maximum of 126 nm and a high photoluminescence quantum efficiency of ∼85% at room temperature. UV-pumped WLEDs fabricated using this yellow emitter together with a commercial europium-doped barium magnesium aluminate blue phosphor (BaMgAl10O17:Eu2+) can exhibit high color rendering indexes of up to 85.

  17. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT) aimed at the dynamic observation of organs such as the heart has lately been advocated. Its realization requires image reconstruction that is markedly faster than in present CT systems. Although various reconstruction methods have been proposed so far, the only method practically employed at present is the filtered backprojection (FBP) method, which gives high quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the quality of the images obtained was not good, despite being promising for high-speed reconstruction because of its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high quality images by pursuing the relationship between image quality and interpolation method. In this algorithm, the radial data sampling points in Fourier space are increased by a factor of 2 to the power β, and linear or spline interpolation is used. Comparison of this method with the present FBP method leads to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method becomes about 1/10 of that of the FBP method, and the memory requirement is also reduced by about 20%. (Wakatsuki, Y.)
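
    The direct Fourier idea summarized above rests on the projection-slice theorem: the 1D Fourier transform of each projection is one radial line of the object's 2D Fourier transform, so the image can be recovered by interpolating these polar samples onto a Cartesian frequency grid and applying an inverse 2D FFT, with the interpolation step controlling image quality. The following is a minimal sketch of that reconstruction on a synthetic disk phantom; the phantom, grid size and linear interpolation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of direct Fourier (projection-slice) CT reconstruction.
# Phantom, grid size and interpolation choice are illustrative assumptions.
import numpy as np
from scipy.ndimage import rotate
from scipy.interpolate import griddata

N = 128
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
phantom = ((x ** 2 + y ** 2) < 0.4 ** 2).astype(float)   # simple disk phantom

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
# Parallel-beam sinogram: rotate the image and integrate along one axis.
sinogram = np.array([rotate(phantom, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Projection-slice theorem: the 1D FFT of each projection is a radial line
# of the object's 2D FFT.
proj_fft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
freqs = np.fft.fftshift(np.fft.fftfreq(N))

theta = np.deg2rad(angles)[:, None]
u = freqs[None, :] * np.cos(theta)      # polar sample positions in Fourier space
v = freqs[None, :] * np.sin(theta)

# Interpolate the polar samples onto a Cartesian frequency grid; this is the
# step whose accuracy dominates the final image quality.
uu, vv = np.meshgrid(freqs, freqs)
F_re = griddata((u.ravel(), v.ravel()), proj_fft.real.ravel(), (uu, vv),
                method='linear', fill_value=0.0)
F_im = griddata((u.ravel(), v.ravel()), proj_fft.imag.ravel(), (uu, vv),
                method='linear', fill_value=0.0)
F = F_re + 1j * F_im

recon = np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(F))))
print("reconstruction RMS error:", np.sqrt(np.mean((recon - phantom) ** 2)))
```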

  18. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue or nausea, reported with first-generation systems, are not different from those with two-dimensional (2D) LVSs. The higher cost of the system, the obligation to wear glasses, and the big and heavy camera probe in some of the devices are the negative aspects that need to be improved. Depth loss in tissues in 2D LVSs and the associated adverse events can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the lack of the side-effects reported by surgeons with first-generation systems, 3D LVSs seem to be a strong competitor to classical laparoscopic imaging systems. Thanks to technological advancements, using lighter and smaller cameras and monitors without glasses will be possible in the near future.

  19. Collective excitations and superconductivity in reduced dimensional systems - Possible mechanism for high Tc

    International Nuclear Information System (INIS)

    Santoyo, B.M.

    1989-01-01

    The author studies in full detail a possible mechanism of superconductivity in slender electronic systems of finite cross section. This mechanism is based on the pairing interaction mediated by the multiple modes of acoustic plasmons in these structures. First, he shows that multiple non-Landau-damped acoustic plasmon modes exist for electrons in a quasi-one-dimensional wire at finite temperatures. These plasmons are of two basic types. The first is made up of collective longitudinal oscillations in which the electrons of a given transverse energy level oscillate against the electrons in the neighboring transverse energy level. These modes are called Slender Acoustic Plasmons or SAPs. The other mode is the quasi-one-dimensional acoustic plasmon mode in which all the electrons oscillate together in phase among themselves but out of phase against the positive ion background. He shows numerically and argues physically that even for a temperature comparable to the mode separation Δω the SAPs and the quasi-one-dimensional plasmon persist. Then, based on a clear physical picture, he develops in terms of the dielectric function a theory of superconductivity capable of treating the simultaneous participation of multiple bosonic modes that mediate the pairing interaction. The effect of mode damping is then incorporated in a simple manner that is free of the encumbrance of the strong-coupling Green's function formalism usually required for the retardation effect. Explicit formulae including such damping are derived for the critical temperature Tc and the energy gap Δ0. With these modes, and armed with such a formalism, he proceeds to investigate a possible superconducting mechanism for high Tc in quasi-one-dimensional single-wire and multi-wire systems

  20. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more crucial, especially when only the spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, the extraction and evaluation of textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, owing to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as well as the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Not only the spectral information but also textural information was used during the classification. For textural information, second order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), giving the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input

  1. Two-dimensional numerical experiments with DRIX-2D on two-phase-water-flows referring to the HDR-blowdown-experiments

    International Nuclear Information System (INIS)

    Moesinger, H.

    1979-08-01

    The computer program DRIX-2D has been developed from SOLA-DF. The essential elements of the program structure are described. In order to verify DRIX-2D, an Edwards blowdown experiment is calculated and other numerical results are compared with steady-state experiments and models. Numerical experiments on transient two-phase flow, occurring in the broken pipe of a PWR in the case of a hypothetical LOCA, are performed. The essential results of the two-dimensional calculations are: 1. The appearance of a radial profile of void fraction, velocity, sound speed and mass flow rate inside the blowdown nozzle. The reason for this is the flow contraction at the nozzle inlet, leading to more vapour production in the vicinity of the pipe wall. 2. A comparison between modelling in axisymmetric and Cartesian coordinates and calculations with and without the core barrel shows the following: a) The three-dimensional flow pattern at the nozzle inlet is poorly described using Cartesian coordinates. In consequence a considerable difference in pressure history results. b) The core barrel alters the reflection behaviour of the pressure waves oscillating in the blowdown nozzle. Therefore, the core barrel should be modelled as a wall normal to the nozzle axis. (orig./HP) [de]

  2. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and “hidden” dimensions

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747

  3. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region, as obtained by a numerical hydrodynamics and particle transport model, we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation field of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separations. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with orbital orientation.

  4. High-speed three-dimensional plasma temperature determination of axially symmetric free-burning arcs

    International Nuclear Information System (INIS)

    Bachmann, B; Ekkert, K; Bachmann, J-P; Marques, J-L; Schein, J; Kozakov, R; Gött, G; Schöpp, H; Uhrlandt, D

    2013-01-01

    In this paper we introduce an experimental technique that allows for high-speed, three-dimensional determination of electron density and temperature in axially symmetric free-burning arcs. Optical filters with narrow spectral bands of 487.5–488.5 nm and 689–699 nm are utilized to gain two-dimensional spectral information on a free-burning argon tungsten inert gas arc. A setup of mirrors allows one to image identical arc sections in the two spectral bands onto a single camera chip. Two different Abel inversion algorithms have been developed to reconstruct the original radial distribution of emission coefficients detected within each spectral window and to confirm the results. With the assumption of local thermodynamic equilibrium we calculate emission coefficients as a function of temperature by application of the Saha equation, the ideal gas law, the quasineutral gas condition and the NIST compilation of spectral lines. Ratios of calculated emission coefficients are compared with measured ones, yielding local plasma temperatures. In the case of axial symmetry the three-dimensional plasma temperature distributions have been determined at dc currents of 100, 125, 150 and 200 A, yielding temperatures up to 20000 K in the hot cathode region. These measurements have been validated by four different techniques utilizing a high-resolution spectrometer at different positions in the plasma. Plasma temperatures show good agreement throughout the different methods. Additionally, spatially resolved transient plasma temperatures have been measured for a dc pulsed process employing a high-speed frame rate of 33000 frames per second, showing the modulation of the arc isothermals with time and providing information about the sensitivity of the experimental approach. (paper)
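
    The core numerical step in such line-ratio thermometry is the Abel inversion that converts each measured lateral intensity profile into local radial emission coefficients. The following is a minimal sketch of one common discretization, the onion-peeling method, offered only as a generic illustration (not necessarily one of the two algorithms used by the authors); the synthetic emission profile and shell count are assumptions.

```python
# Minimal "onion peeling" Abel inversion sketch for an axially symmetric
# emitter: recover radial emission coefficients eps(r) from a line-integrated
# lateral intensity profile I(y). Generic discretization on synthetic data.
import numpy as np

def onion_peeling_matrix(n, dr=1.0):
    """A[i, j] = chord length of annular shell j seen at lateral offset y_i."""
    r = np.arange(n + 1) * dr            # shell boundaries
    y = (np.arange(n) + 0.5) * dr        # lateral positions of the detector bins
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if r[j + 1] > y[i]:
                outer = np.sqrt(r[j + 1] ** 2 - y[i] ** 2)
                inner = np.sqrt(max(r[j] ** 2 - y[i] ** 2, 0.0))
                A[i, j] = 2.0 * (outer - inner)
    return A

n = 50
A = onion_peeling_matrix(n)

# Synthetic "true" emission profile and the corresponding line-integrated signal.
r_mid = np.arange(n) + 0.5
eps_true = np.exp(-(r_mid / 15.0) ** 2)
I_measured = A @ eps_true

# Onion peeling amounts to solving this (upper-triangular) system shell by shell;
# the recovered emission coefficients would then feed the line-ratio thermometry.
eps_recovered = np.linalg.solve(A, I_measured)
print("max reconstruction error:", np.abs(eps_recovered - eps_true).max())
```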

  5. Data acquisition systems for high energy physics experiments

    International Nuclear Information System (INIS)

    Duran, I.; Olmos, P.

    1986-01-01

    The Data Acquisition Systems most frequently used in High Energy Physics experiments are described. This report begins with a brief description of the main elements of a typical signal processing chain, followed by a detailed exposition of the most popular instrumentation standards used in this kind of experiment: NIM, CAMAC, and VMI. (author). 20 figs., 9 refs.

  6. THREE-DIMENSIONAL OBSERVATIONS ON THICK BIOLOGICAL SPECIMENS BY HIGH VOLTAGE ELECTRON MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Tetsuji Nagata

    2011-05-01

    Full Text Available Thick biological specimens, prepared as whole-mount cultured cells or thick sections from embedded tissues, were stained with histochemical reactions, such as thiamine pyrophosphatase, glucose-6-phosphatase, cytochrome oxidase, acid phosphatase, DAB reactions and radioautography, to observe 3-D ultrastructures of cell organelles by producing stereo-pairs with high voltage electron microscopy at accelerating voltages of 400-1000 kV. The organelles demonstrated were the Golgi apparatus, endoplasmic reticulum, mitochondria, lysosomes, peroxisomes, pinocytotic vesicles and incorporations of radioactive compounds. As a result, these cell organelles were observed 3-dimensionally and the relative relationships between them were demonstrated.

  7. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.

  8. The high exponent limit $p \to \infty$ for the one-dimensional nonlinear wave equation

    OpenAIRE

    Tao, Terence

    2009-01-01

    We investigate the behaviour of solutions $\phi = \phi^{(p)}$ to the one-dimensional nonlinear wave equation $-\phi_{tt} + \phi_{xx} = -|\phi|^{p-1} \phi$ with initial data $\phi(0,x) = \phi_0(x)$, $\phi_t(0,x) = \phi_1(x)$, in the high exponent limit $p \to \infty$ (holding $\phi_0, \phi_1$ fixed). We show that if the initial data $\phi_0, \phi_1$ are smooth with $\phi_0$ taking values in $(-1,1)$ and obey a mild non-degeneracy condition, then $\phi$ converges locally uniformly to a piecewis...

  9. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate with the precision of up to three decimal places for the whole range of the magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all formulae to obtain similar expressions of any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for the excited states with the principal quantum number of up to n=100.

  10. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N. N.; Mazur, E. A.

    2016-01-01

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  11. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon–hydrogen bonds

    KAUST Repository

    Wang, Liang

    2015-04-22

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold–gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon–hydrogen bonds with molecular oxygen.

  12. Electric Field Guided Assembly of One-Dimensional Nanostructures for High Performance Sensors

    Directory of Open Access Journals (Sweden)

    Wing Kam Liu

    2012-05-01

    Full Text Available Various nanowire or nanotube-based devices have been demonstrated to fulfill the anticipated future demands on sensors. To fabricate such devices, electric field-based methods have demonstrated a great potential to integrate one-dimensional nanostructures into various forms. This review paper discusses theoretical and experimental aspects of the working principles, the assembled structures, and the unique functions associated with electric field-based assembly. The challenges and opportunities of the assembly methods are addressed in conjunction with future directions toward high performance sensors.

  13. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  14. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.

  15. A novel algorithm of artificial immune system for high-dimensional function numerical optimization

    Institute of Scientific and Technical Information of China (English)

    DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen

    2005-01-01

    Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, it is proved that IMCPA is convergent. Compared with some other evolutionary programming algorithms (like the Breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, like high-dimensional function optimization, which maintains the diversity of the population, avoids premature convergence to some extent, and has a higher convergence speed.
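
    The clonal selection principle behind this family of algorithms is simple: clone the best candidate solutions in proportion to their rank, hypermutate the clones inversely to their rank, and reselect while injecting a few random newcomers for diversity. The sketch below illustrates that generic loop on a high-dimensional test function; it is not IMCPA itself (which adds immune-memory and programming operators), and the population size, clone counts and mutation scales are assumptions.

```python
# Generic clonal-selection sketch for high-dimensional function minimization.
# NOT IMCPA itself; all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                       # simple high-dimensional test function
    return np.sum(x ** 2, axis=-1)

dim, pop_size, n_gen = 30, 20, 200
bounds = (-5.0, 5.0)
pop = rng.uniform(*bounds, size=(pop_size, dim))

for _ in range(n_gen):
    pop = pop[np.argsort(sphere(pop))]                 # best antibodies first

    clones, n_clone_max = [], 10
    for rank, antibody in enumerate(pop):
        n_clones = max(1, n_clone_max - rank)          # better rank -> more clones
        scale = 0.5 * (rank + 1) / pop_size            # better rank -> smaller mutation
        c = antibody + rng.normal(0.0, scale, size=(n_clones, dim))
        clones.append(np.clip(c, *bounds))
    clones = np.vstack([pop, *clones])                 # keep parents (elitism)

    # Select the best individuals and replace the worst two with random newcomers.
    clones = clones[np.argsort(sphere(clones))][:pop_size]
    clones[-2:] = rng.uniform(*bounds, size=(2, dim))
    pop = clones

print("best value found:", sphere(pop).min())
```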

  16. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive the behavior of the wave beams is appreciably affected by the self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  17. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
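
    The merge tree referred to above records, for a scalar field, where connected components of sublevel sets are born and where they merge as the threshold increases. The following is a minimal union-find sketch of that construction for a 1D field; it only computes the merge events at a single time step, and the subtree matching between consecutive time steps described in the paper is not shown.

```python
# Minimal sketch of merge-tree construction for the sublevel sets of a 1D
# scalar field using union-find: record where branches (born at local minima)
# merge as the threshold increases. Time tracking is not shown.
import numpy as np

def merge_tree_1d(f):
    order = np.argsort(f)                # process vertices by increasing value
    parent = {}                          # union-find forest
    birth = {}                           # representative -> birth (minimum) value
    events = []                          # (merge level, surviving rep, absorbed rep)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = f[i]                  # tentatively its own component
        for j in (i - 1, i + 1):         # neighbours already below the threshold
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the older (lower-birth) branch survives the merge
                    keep, lose = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    parent[lose] = keep
                    if birth[lose] < f[i]:      # genuine merge of two existing branches
                        events.append((f[i], keep, lose))
    return events

f = np.array([3.0, 1.0, 2.5, 0.5, 2.0, 1.5, 4.0])
for level, keep, lose in merge_tree_1d(f):
    print(f"at level {level:.1f}: branch born at x={lose} merges into branch of x={keep}")
```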

  18. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate with the precision of up to three decimal places for the whole range of the magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all formulae to obtain similar expressions of any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for the excited states with the principal quantum number of up to n=100.

  19. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon-hydrogen bonds

    Science.gov (United States)

    Wang, Liang; Zhu, Yihan; Wang, Jian-Qiang; Liu, Fudong; Huang, Jianfeng; Meng, Xiangju; Basset, Jean-Marie; Han, Yu; Xiao, Feng-Shou

    2015-04-01

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold-gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon-hydrogen bonds with molecular oxygen.

  20. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
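
    The estimator analysed above minimizes the negative log partial likelihood plus an L1 penalty. As a purely illustrative companion to the theory, the sketch below fits that estimator on synthetic data by proximal gradient descent (ISTA), with a Breslow-style handling of risk sets and no ties; the step size, penalty level and data-generating model are assumptions, not the authors' code.

```python
# Minimal proximal-gradient (ISTA) sketch of the lasso-penalized Cox estimator.
# Synthetic data, no ties; step size and penalty level are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]
t = rng.exponential(np.exp(-X @ beta_true))          # event times (hazard ~ exp(X beta))
c = rng.exponential(t.mean(), size=n)                # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

# Sort by time so that the risk set of subject i is {i, i+1, ..., n-1}.
order = np.argsort(time)
X, event = X[order], event[order]

def neg_log_partial_likelihood_grad(beta):
    eta = X @ beta
    w = np.exp(eta)
    # Reverse cumulative sums give risk-set totals sum_{j >= i} w_j (and w_j x_j).
    w_cum = np.cumsum(w[::-1])[::-1]
    wx_cum = np.cumsum((w[:, None] * X)[::-1], axis=0)[::-1]
    grad = -(event[:, None] * (X - wx_cum / w_cum[:, None])).sum(axis=0)
    return grad / n

def soft_threshold(z, a):
    return np.sign(z) * np.maximum(np.abs(z) - a, 0.0)

lam, step, beta = 0.02, 0.1, np.zeros(p)
for _ in range(1000):
    beta = soft_threshold(beta - step * neg_log_partial_likelihood_grad(beta), step * lam)

print("indices of nonzero coefficients:", np.flatnonzero(beta))
```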

  1. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Degtyarenko, N. N.; Mazur, E. A., E-mail: eugen-mazur@mail.ru [National Research Nuclear University MEPhI (Russian Federation)

    2016-08-15

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  2. Solar array experiments on the SPHINX satellite. [Space Plasma High voltage INteraction eXperiment satellite

    Science.gov (United States)

    Stevens, N. J.

    1974-01-01

    The Space Plasma, High Voltage Interaction Experiment (SPHINX) is the name given to an auxiliary payload satellite scheduled to be launched in January 1974. The principal experiments carried on this satellite are specifically designed to obtain the engineering data on the interaction of high voltage systems with the space plasma. The classes of experiments are solar array segments, insulators, insulators with pin holes and conductors. The satellite is also carrying experiments to obtain flight data on three new solar array configurations: the edge illuminated-multijunction cells, the teflon encased cells, and the violet cells.

  3. Designs for highly nonlinear ablative Rayleigh-Taylor experiments on the National Ignition Facility

    International Nuclear Information System (INIS)

    Casner, A.; Masse, L.; Liberatore, S.; Jacquet, L.; Loiseau, P.; Poujade, O.; Smalyuk, V. A.; Bradley, D. K.; Park, H. S.; Remington, B. A.; Igumenshchev, I.; Chicanne, C.

    2012-01-01

    We present two designs relevant to ablative Rayleigh-Taylor instability in transition from weakly nonlinear to highly nonlinear regimes at the National Ignition Facility [E. I. Moses, J. Phys.: Conf. Ser. 112, 012003 (2008)]. The sensitivity of nonlinear Rayleigh-Taylor instability physics to ablation velocity is addressed with targets driven by indirect drive, with stronger ablative stabilization, and by direct drive, with weaker ablative stabilization. The indirect drive design demonstrates the potential to reach a two-dimensional bubble-merger regime with a 20 ns duration drive at moderate radiation temperature. The direct drive design achieves a 3 to 5 times increased acceleration distance for the sample in comparison to previous experiments allowing at least 2 more bubble generations when starting from a three-dimensional broadband spectrum.

  4. Designs for highly nonlinear ablative Rayleigh-Taylor experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Casner, A.; Masse, L.; Liberatore, S.; Jacquet, L.; Loiseau, P.; Poujade, O. [CEA, DAM, DIF, F-91297 Arpajon (France); Smalyuk, V. A.; Bradley, D. K.; Park, H. S.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Igumenshchev, I. [Laboratory of Laser Energetics, University of Rochester, Rochester, New York 14623-1299 (United States); Chicanne, C. [CEA, DAM, VALDUC, F-21120 Is-sur-Tille (France)

    2012-08-15

    We present two designs relevant to ablative Rayleigh-Taylor instability in transition from weakly nonlinear to highly nonlinear regimes at the National Ignition Facility [E. I. Moses, J. Phys.: Conf. Ser. 112, 012003 (2008)]. The sensitivity of nonlinear Rayleigh-Taylor instability physics to ablation velocity is addressed with targets driven by indirect drive, with stronger ablative stabilization, and by direct drive, with weaker ablative stabilization. The indirect drive design demonstrates the potential to reach a two-dimensional bubble-merger regime with a 20 ns duration drive at moderate radiation temperature. The direct drive design achieves a 3 to 5 times increased acceleration distance for the sample in comparison to previous experiments allowing at least 2 more bubble generations when starting from a three-dimensional broadband spectrum.

  5. Exploring rural high school learners' experience of mathematics ...

    African Journals Online (AJOL)

    Exploring rural high school learners' experience of mathematics anxiety in ... State using the Statistical Package for the Social Sciences (SPSS), Version 17.0. ... to observe its prevalence and to implement strategies toward the alleviation of the ...

  6. Ben Macdhui High Altitude Trace Gas and Aerosol Transport Experiment

    CSIR Research Space (South Africa)

    Piketh, SJ

    1999-01-01

    Full Text Available The Ben Macdhui High Altitude Aerosol and Trace Gas Transport Experiment (BHATTEX) was started to characterize the nature and magnitude of atmospheric aerosol and trace gas transport paths recirculating over, and exiting from, southern Africa...

  7. Three-dimensional interconnected porous graphitic carbon derived from rice straw for high performance supercapacitors

    Science.gov (United States)

    Jin, Hong; Hu, Jingpeng; Wu, Shichao; Wang, Xiaolan; Zhang, Hui; Xu, Hui; Lian, Kun

    2018-04-01

    Three-dimensional interconnected porous graphitic carbon materials are synthesized via a combination of graphitization and activation processes with rice straw as the carbon source. The physicochemical properties of the three-dimensional interconnected porous graphitic carbon materials are characterized by nitrogen adsorption/desorption, Fourier-transform infrared spectroscopy, X-ray diffraction, Raman spectroscopy, scanning electron microscopy and transmission electron microscopy. The results demonstrate that the as-prepared carbon is a high surface area material (a specific surface area of 3333 m² g⁻¹ with abundant mesoporous and microporous structures). It exhibits superb performance in symmetric double layer capacitors, with a high specific capacitance of 400 F g⁻¹ at a current density of 0.1 A g⁻¹, good rate performance with 312 F g⁻¹ at a current density of 5 A g⁻¹, and favorable cycle stability with 6.4% loss after 10000 cycles at a current density of 5 A g⁻¹ in an aqueous electrolyte of 6 M KOH. Thus, rice straw is a promising carbon source for fabricating inexpensive, sustainable and high performance supercapacitor electrode materials.

  8. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  9. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods--the areas reported by the control software and by fitting the data with a Gaussian model; these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatographic control software displayed superior limits of detection, in the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to a much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
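
    As an aside, the Gaussian-fit route to a peak area can be illustrated with a short sketch. The snippet below fits a single Gaussian to a synthetic second-dimension peak and integrates the fitted model analytically; the data, peak parameters and noise level are illustrative assumptions, not values from the study.

```python
# Minimal sketch: estimate a chromatographic peak area by fitting a Gaussian,
# as an alternative to summing the areas reported by the instrument software.
# All names and numerical values are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((t - center) / sigma) ** 2)

# Synthetic second-dimension chromatogram: one peak plus detector noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 600)                      # retention time, s
signal = gaussian(t, 1.0, 30.0, 2.5)
signal += rng.normal(scale=0.01, size=t.size)

# Fit, then integrate the fitted model analytically: area = A * sigma * sqrt(2*pi).
popt, _ = curve_fit(gaussian, t, signal, p0=[0.8, 28.0, 3.0])
amplitude, center, sigma = popt
area = amplitude * abs(sigma) * np.sqrt(2.0 * np.pi)
print(f"fitted area = {area:.3f} (true value = {2.5 * np.sqrt(2.0 * np.pi):.3f})")
```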

  10. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    DimitrisG. Stavrakoudis

    2012-04-01

    Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification task, as well as in 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  11. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. The remaining major pitfall is their relatively poor photovoltaic performance in contrast to 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)(MA)PbI perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among the reported 2D devices, with excellent humidity resistance. The enhanced efficiency from 12.3% (without Cs) to 13.7% (with 5% Cs) is attributed to perfectly controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility and charge-transfer kinetics. Surprisingly, it is found that the Cs doping yields superior stability for the 2D perovskite solar cells when subjected to a high humidity environment without encapsulation. The device doped using 5% Cs degrades only ca. 10% after 1400 hours of exposure in 30% relative humidity (RH), and exhibits significantly improved stability under heating and high moisture environments. Our results provide an important step toward air-stable and fully printable low dimensional perovskites as a next-generation renewable energy source.

  12. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  13. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationship in the biomarkers levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  14. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery: a systematic review.

    Science.gov (United States)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian; Kildebro, Niels; Rosenberg, Jacob

    2017-01-01

    This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment as early comparisons have shown contradictory results due to technological shortcomings. This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) comparing newer generation 3D-laparoscopy with 2D-laparoscopy were included through searches in Pubmed, EMBASE, and Cochrane Central Register of Controlled Trials database. Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited for the convincing results to be reproduced. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Transverse Kerr effect in one-dimensional magnetophotonic crystals: Experiment and theory

    Energy Technology Data Exchange (ETDEWEB)

    Erokhin, S. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation)]. E-mail: yerokhin@magn.ru; Boriskina, Yu. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation); Vinogradov, A. [Institute for Theoretical and Applied Electrodynamics, Izhorskaya 13/19, 127412 Moscow (Russian Federation); Inoue, M. [Department of Electrical and Electronic Engineering, Toyohashi University of Technology, 1-1 Hibari-Ga-Oka, Tempaku, Toyohashi 441-8580 (Japan); Kobayashi, D. [Department of Electrical and Electronic Engineering, Toyohashi University of Technology, 1-1 Hibari-Ga-Oka, Tempaku, Toyohashi 441-8580 (Japan); Fedyanin, A. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation); Gan' shina, E. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation); Kochneva, M. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation); Granovsky, A. [Faculty of Physics, Lomonosov Moscow State University, 11992 Moscow (Russian Federation)

    2006-05-15

    Magneto-optical transverse Kerr and Faraday effects are studied experimentally and theoretically in one-dimensional magnetophotonic crystals fabricated from a stack of four repetitions of Bi-substituted yttrium iron garnet and SiO2 layers. The results of theoretical calculations in the framework of a modified matrix approach are consistent with the obtained experimental data, with the exception of one cusp at 480 nm in the transverse Kerr effect spectra. Possible mechanisms of this disagreement are discussed.

  16. Unfolding methods in high-energy physics experiments

    International Nuclear Information System (INIS)

    Blobel, V.

    1985-01-01

    Distributions measured in high-energy physics experiments are often distorted or transformed by limited acceptance and finite resolution of the detectors. The unfolding of measured distributions is an important, but due to inherent instabilities a very difficult problem. Methods for unfolding, applicable for the analysis of high-energy physics experiments, and their properties are discussed. An introduction is given to the method of regularization. (orig.)

  17. Unfolding methods in high-energy physics experiments

    International Nuclear Information System (INIS)

    Blobel, V.

    1984-12-01

    Distributions measured in high-energy physics experiments are often distorted or transformed by limited acceptance and finite resolution of the detectors. The unfolding of measured distributions is an important, but due to inherent instabilities a very difficult problem. Methods for unfolding, applicable for the analysis of high-energy physics experiments, and their properties are discussed. An introduction is given to the method of regularization. (orig.)

  18. Modular safety interlock system for high energy physics experiments

    International Nuclear Information System (INIS)

    Kieffer, J.; Golceff, B.V.

    1980-10-01

    A frequent problem in electronics systems for high energy physics experiments is to provide protection for personnel and equipment. Interlock systems are typically designed as an afterthought and as a result, the working environment around complex experiments with many independent high voltages or hazardous gas subsystems, and many different kinds of people involved, can be particularly dangerous. A set of modular hardware has been designed which makes possible a standardized, integrated, hierarchical systems approach and which can be easily tailored to custom requirements.

  19. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable for important applications in the field of aerospace (space docking tests, etc.). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81% to 2.23% full scale (FS), and the largest class II error is reduced from 3.425% to 1.871% FS. (paper)

  20. A high-order integral solver for scalar problems of diffraction by screens and apertures in three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Oscar P., E-mail: obruno@caltech.edu; Lintner, Stéphane K.

    2013-11-01

    We present a novel methodology for the numerical solution of problems of diffraction by infinitely thin screens in three-dimensional space. Our approach relies on new integral formulations as well as associated high-order quadrature rules. The new integral formulations involve weighted versions of the classical integral operators related to the thin-screen Dirichlet and Neumann problems as well as a generalization to the open-surface problem of the classical Calderón formulae. The high-order quadrature rules we introduce for these operators, in turn, resolve the multiple Green function and edge singularities (which occur at arbitrarily close distances from each other, and which include weakly singular as well as hypersingular kernels) and thus give rise to super-algebraically fast convergence as the discretization sizes are increased. When used in conjunction with Krylov-subspace linear algebra solvers such as GMRES, the resulting solvers produce results of high accuracy in small numbers of iterations for low and high frequencies alike. We demonstrate our methodology with a variety of numerical results for screen and aperture problems at high frequencies—including simulation of classical experiments such as the diffraction by a circular disc (featuring in particular the famous Poisson spot), evaluation of interference fringes resulting from diffraction across two nearby circular apertures, as well as solution of problems of scattering by more complex geometries consisting of multiple scatterers and cavities.

  1. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  2. Three-dimensional bicontinuous nanoporous Au/polyaniline hybrid films for high-performance electrochemical supercapacitors

    Science.gov (United States)

    Lang, Xingyou; Zhang, Ling; Fujita, Takeshi; Ding, Yi; Chen, Mingwei

    2012-01-01

    We report three-dimensional bicontinuous nanoporous Au/polyaniline (PANI) composite films made by one-step electrochemical polymerization of a PANI shell onto dealloyed nanoporous gold (NPG) skeletons for applications in electrochemical supercapacitors. The NPG/PANI based supercapacitors exhibit ultrahigh volumetric capacitance (∼1500 F cm⁻³) and energy density (∼0.078 Wh cm⁻³), which are seven and four orders of magnitude higher than those of electrolytic capacitors, with the same power density of up to ∼190 W cm⁻³. The outstanding capacitive performances result from a novel nanoarchitecture in which pseudocapacitive PANI shells are incorporated into the pore channels of highly conductive NPG, making them promising candidates as electrode materials in supercapacitor devices combining high energy-storage densities with high power delivery.

  3. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  4. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  5. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  6. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme of high-efficiency one-dimensional (1D) atom localization via measurement of upper state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the localization peak position and number, as well as the conditional position probability, can be easily controlled by the system parameters, and the sub-half-wavelength atom localization is also observed. More importantly, there is 100% detecting probability of the atom in the subwavelength domain when the corresponding conditions are satisfied. The proposed scheme may open up a promising way to achieve high-precision and high-efficiency 1D atom localization. (paper)

  7. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

    The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration to realise both high resolution and high throughput using a two-dimensional area sensor. For the spectral resolution, we obtained the interferogram over a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to realise a higher throughput by accumulating the signal between vertical pixels. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.
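
    The core Fourier-transform step behind McFT spectroscopy can be sketched as follows. The example builds a synthetic two-line interferogram over optical path difference and recovers the line positions with an FFT; the phase-error correction and warping compensation described in the abstract are omitted, and all numbers are assumptions.

```python
# Minimal sketch of the Fourier-transform step in McFT spectroscopy: a spectrum
# is recovered from an interferogram sampled over optical path difference (OPD).
# Illustrative values only; no phase or warping compensation is included.
import numpy as np

n = 4096
opd = np.linspace(0.0, 0.2, n)                 # optical path difference, cm
wavenumbers = np.array([1200.0, 1650.0])       # two spectral lines, cm^-1
weights = np.array([1.0, 0.6])

# Ideal two-line interferogram: a sum of cosines over the OPD axis.
interferogram = (weights[:, None] * np.cos(2 * np.pi * wavenumbers[:, None] * opd)).sum(axis=0)

# Recover the spectrum: FFT magnitude against a wavenumber axis.
spectrum = np.abs(np.fft.rfft(interferogram))
freq_axis = np.fft.rfftfreq(n, d=opd[1] - opd[0])   # cycles per cm = cm^-1

peaks = freq_axis[np.argsort(spectrum)[-2:]]
print("recovered line positions (cm^-1):", np.sort(peaks))
```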

  8. Multi-dimensional diagnostics of high power ion beams by Arrayed Pinhole Camera System

    International Nuclear Information System (INIS)

    Yasuike, K.; Miyamoto, S.; Shirai, N.; Akiba, T.; Nakai, S.; Imasaki, K.; Yamanaka, C.

    1993-01-01

    The authors developed a multi-dimensional beam diagnostics system (with spatial and time resolution) based on the newly developed Arrayed Pinhole Camera (APC). The APC measures the spatial distribution of beam divergence and flux density. Two types of particle detectors are used in this study: CR-39, which records time-integrated images, and a gated micro-channel plate (MCP) with a CCD camera, which enables time-resolved diagnostics. The diagnostic systems achieve better than 10 mrad divergence resolution and 0.5 mm spatial resolution on the objects, respectively, and the time-resolving system has 10 ns time resolution. The experiments are performed on the Reiden-IV and Reiden-SHVS induction linacs. The authors obtain time-integrated divergence distributions of the Reiden-IV proton beam and a time-resolved image on Reiden-SHVS.

  9. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation: It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...
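
    For readers unfamiliar with AUC-driven model building in high dimensions, the sketch below fits a generic L1-penalized logistic model on synthetic omics-like data and scores it by cross-validated AUC. It only illustrates the evaluation idea; it is not the AucPR estimator of the paper.

```python
# Illustrative sketch only: an AUC-evaluated penalized model for high-dimensional
# binary prediction. The data, penalty choice and settings are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 120, 500                                  # few samples, many markers
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=n) > 0).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```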

  10. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser, which consists of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme can realize high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that zero bit error rate performance has been achieved.
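
    A toy numerical sketch of the OAM coding idea is given below: each bit of a symbol switches one OAM mode on or off in a superposition, and the receiver recovers the bits by projecting the field onto the candidate modes. The spatial light modulator, Dove prism and interferometer hardware are abstracted away, and the mode indices are illustrative assumptions.

```python
# Toy sketch of OAM coding on a ring of azimuthal samples; not the paper's
# optical implementation, just the encode/decode bookkeeping.
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
oam_charges = [1, 2, 3, 4, 5, 6, 7, 8]            # 8 modes -> one byte per symbol

def encode(bits):
    """Superpose the OAM modes whose bit is 1, plus an l = 0 reference term."""
    field = np.ones_like(phi, dtype=complex)
    for bit, l in zip(bits, oam_charges):
        if bit:
            field += np.exp(1j * l * phi)
    return field

def decode(field):
    """Project onto each candidate mode; a large overlap means the bit was 1."""
    overlaps = [abs(np.vdot(np.exp(1j * l * phi), field)) / phi.size for l in oam_charges]
    return [1 if o > 0.5 else 0 for o in overlaps]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(bits)) == bits
print("decoded symbol:", decode(encode(bits)))
```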

  11. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also sets a transmittable group value for each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approaches is that existing state-of-the-art schemes use duty cycle and sleep mode to save energy consumption of individual sensor devices, whereas the proposed scheme employs the group management MAC scheme for sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregate.

  12. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    Full Text Available In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
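
    The evaluation protocol described above can be sketched in a few lines: competing strategies are trained on bootstrap samples and scored on the left-out observations with a strictly proper scoring rule (the Brier score here). The models and data below are placeholders, not the Nugenob strategies themselves.

```python
# Minimal sketch of a "prediction game" comparison via bootstrap cross-validation
# and the Brier score. All models, data and settings are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

strategies = {
    "penalized logistic": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

def bootstrap_cv_score(model, n_boot=50):
    """Train on a bootstrap sample, score on the out-of-bag observations."""
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), idx)
        if len(np.unique(y[idx])) < 2 or oob.size == 0:
            continue
        model.fit(X[idx], y[idx])
        scores.append(brier_score_loss(y[oob], model.predict_proba(X[oob])[:, 1]))
    return float(np.mean(scores))

for name, model in strategies.items():
    print(f"{name}: mean out-of-bag Brier score = {bootstrap_cv_score(model):.3f}")
```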

  13. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases, it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches of all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
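
    The stopping criterion described in the abstract, accepting a feature subset X once I(X; Y) reaches H(Y), can be illustrated with plug-in estimates on small discrete data. The synthetic example below also shows a case where two features are individually uninformative but jointly determine the class, which pairwise mutual information cannot capture; the data are purely illustrative.

```python
# Plug-in entropy and mutual information estimates on discrete synthetic data,
# illustrating the I(X; Y) = H(Y) (Markov blanket) criterion from the abstract.
import numpy as np
from collections import Counter

def entropy(rows):
    counts = np.array(list(Counter(map(tuple, rows)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(X, y):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), with X a 2-D array of discrete features."""
    joint = np.column_stack([X, y])
    return entropy(X) + entropy(y.reshape(-1, 1)) - entropy(joint)

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=500)
x1 = rng.integers(0, 2, size=500)              # x1 alone is independent of y
x2 = y ^ x1                                    # but x1 XOR x2 recovers y exactly
x3 = rng.integers(0, 2, size=500)              # irrelevant feature

for name, subset in [("{x1}", [x1]), ("{x1, x2}", [x1, x2]), ("{x3}", [x3])]:
    X = np.column_stack(subset)
    print(f"I({name}; Y) = {mutual_information(X, y):.3f}  vs  H(Y) = {entropy(y.reshape(-1, 1)):.3f}")
```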

  14. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour on the same dataset in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to compute the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the difference among other methods was not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
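
    The comparison protocol can be sketched as follows: several classifiers are evaluated on the same data with stratified 10-fold cross-validation. The synthetic data below stand in for the entrepreneurial-intentions survey, and the model settings are illustrative assumptions.

```python
# Minimal sketch of a 10-fold cross-validated comparison of four classifiers.
# Data and hyperparameters are placeholders, not the paper's configuration.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n, p = 300, 80
X = rng.normal(size=(n, p))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=n) > 0).astype(int)

models = {
    "MLP neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "CART tree": DecisionTreeClassifier(random_state=0),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "k-nearest neighbour": KNeighborsClassifier(n_neighbors=7),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:22s} accuracy = {acc.mean():.3f} +/- {acc.std():.3f}")
```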

  15. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets. Then, cheating attempts can be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes
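
    A classical caricature of the correlation-sign principle described above is sketched below: the sender's random {0, π} phase choice flips the sign of the correlation between the two recipients' outcomes, so only by combining their results can they recover the key bit. This is not a simulation of the time-bin entangled states themselves, and all quantities are illustrative.

```python
# Classical toy model of the correlation-sign idea behind the QSS scheme.
import numpy as np

rng = np.random.default_rng(8)
n_bits = 16

key_sent = rng.integers(0, 2, size=n_bits)      # sender's phase choice: 0 -> 0, 1 -> pi
alice = rng.choice([-1, +1], size=n_bits)       # recipient A's local outcomes
bob = alice * np.where(key_sent == 0, +1, -1)   # correlation sign set by the phase

# Neither outcome sequence alone reveals the key; the product of both does.
key_recovered = (alice * bob == -1).astype(int)
assert np.array_equal(key_recovered, key_sent)
print("shared key:", key_recovered)
```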

  17. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
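
    The tuning-parameter selection step can be sketched as below. Since the MCP penalty and the authors' coordinate-descent solver are not available in scikit-learn, an L1-penalized path stands in purely to illustrate how CV-AUC picks the penalty level; all data are synthetic.

```python
# Sketch of tuning-parameter selection by cross-validated AUC along a penalty
# path. The L1 penalty is a stand-in for MCP; values and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
n, p = 150, 400
X = rng.normal(size=(n, p))
y = (X[:, :4] @ np.array([1.5, -1.0, 1.0, -0.8]) + rng.normal(size=n) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
penalty_grid = np.logspace(-2, 1, 12)          # candidate values of C = 1/lambda

best_C, best_auc = None, -np.inf
for C in penalty_grid:
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    if auc > best_auc:
        best_C, best_auc = C, auc

print(f"selected C = {best_C:.3f} with CV-AUC = {best_auc:.3f}")
```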

  18. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability.

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-07

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical applications. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the resulting biomorphic graphene is highly crystallized, with atomic layer-thickness controllability, structural designability and fewer non-carbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from the as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (∼110,700 S m⁻¹ at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  19. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

    Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
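
    A rough sketch of the idea behind sLED follows: compare two sample covariance matrices through the leading eigenvalue of their difference and calibrate it with a permutation null. The sparse-eigenvalue machinery and asymptotic theory of the paper are omitted, and the data are synthetic.

```python
# Permutation test on the leading eigenvalue of the difference of two sample
# covariance matrices; a simplified caricature of the sLED idea, synthetic data.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 40
X = rng.normal(size=(n, p))                    # group 1
Y = rng.normal(size=(n, p))
Y[:, 0] *= 1.8                                 # group 2: one variable with inflated variance

def leading_eig_stat(A, B):
    D = np.cov(A, rowvar=False) - np.cov(B, rowvar=False)
    return np.max(np.abs(np.linalg.eigvalsh(D)))

observed = leading_eig_stat(X, Y)
pooled = np.vstack([X, Y])
null = []
for _ in range(500):
    perm = rng.permutation(2 * n)
    null.append(leading_eig_stat(pooled[perm[:n]], pooled[perm[n:]]))

p_value = (1 + np.sum(np.array(null) >= observed)) / (1 + len(null))
print(f"observed statistic = {observed:.3f}, permutation p-value = {p_value:.3f}")
```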

  20. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-01

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical applications. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the resulting biomorphic graphene is highly crystallized, with atomic layer-thickness controllability, structural designability and fewer non-carbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from the as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (~110,700 S m⁻¹ at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  1. Stable Graphene-Two-Dimensional Multiphase Perovskite Heterostructure Phototransistors with High Gain.

    Science.gov (United States)

    Shao, Yuchuan; Liu, Ye; Chen, Xiaolong; Chen, Chen; Sarpkaya, Ibrahim; Chen, Zhaolai; Fang, Yanjun; Kong, Jaemin; Watanabe, Kenji; Taniguchi, Takashi; Taylor, André; Huang, Jinsong; Xia, Fengnian

    2017-12-13

    Recently, two-dimensional (2D) organic-inorganic perovskites emerged as an alternative to their three-dimensional (3D) counterparts in photovoltaic applications, with improved moisture resistance. Here, we report a stable, high-gain phototransistor consisting of a monolayer graphene on hexagonal boron nitride (hBN) covered by a 2D multiphase perovskite heterostructure, which was realized using a newly developed two-step ligand exchange method. In this phototransistor, the multiple phases with varying bandgap in the 2D perovskite thin films are aligned for efficient electron-hole pair separation, leading to a high responsivity of ∼10⁵ A W⁻¹ at 532 nm. Moreover, the designed phase alignment method aggregates more hydrophobic butylammonium cations close to the upper surface of the 2D perovskite thin film, preventing the permeation of moisture and enhancing the device stability dramatically. In addition, the faster photoresponse and smaller 1/f noise observed in the 2D perovskite phototransistors indicate a smaller density of deep hole traps in the 2D perovskite thin film compared with their 3D counterparts. These desirable properties not only improve the performance of the phototransistor, but also provide a new direction for future enhancement of the efficiency of 2D perovskite photovoltaics.

  2. The Pulsed High Density Experiment (PHDX) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John P. [Univ. of Washington, Seattle, WA (United States); Andreason, Samuel [Univ. of Washington, Seattle, WA (United States)

    2017-04-27

    The purpose of this paper is to present the conclusions that can be drawn from the Field Reversed Configuration (FRC) formation experiments conducted on the Pulsed High Density experiment (PHD) at the University of Washington. The experiment is ongoing. The experimental goal for this first stage of PHD was to generate a stable, high flux (>10 mWb), high energy (>10 kJ) target FRC. Such results would be adequate as a starting point for several later experiments. This work focuses on the experimental implementation and the results of the first four-month run. Difficulties were encountered due to the initial on-axis plasma ionization source. Flux trapping with this ionization source acting alone was insufficient to accomplish the experimental objectives. Additional ionization methods were utilized to overcome this difficulty. A more ideal plasma source layout is suggested and will be explored in forthcoming work.

  3. The Motivational Appeal of Interactive Storytelling: Towards a Dimensional Model of the User Experience

    Science.gov (United States)

    Roth, Christian; Vorderer, Peter; Klimmt, Christoph

    A conceptual account of the quality of the user experience that interactive storytelling intends to facilitate is introduced. Building on social-scientific research on 'old' entertainment media, the experiential qualities of curiosity, suspense, aesthetic pleasantness, self-enhancement, and optimal task engagement ("flow") are proposed as key elements of a theory of user experience in interactive storytelling. Perspectives for the evolution of the model, research and application are briefly discussed.

  4. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

    Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality...
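
    For reference, the exact cubic-time angle-based outlier factor that the randomized algorithm approximates can be computed directly on a small data set, as in the sketch below; low variance of the pairwise angle terms flags an outlier. The data are synthetic and the implementation is a plain illustration, not the authors' projection-based estimator.

```python
# Exact angle-based outlier factor (ABOF) on a small 2-D data set: for each
# point, the variance over pairs of other points of the distance-weighted
# dot product of the two difference vectors. Outliers have small ABOF.
import numpy as np
from itertools import combinations

def abof(points):
    points = np.asarray(points, dtype=float)
    scores = []
    for i, p in enumerate(points):
        others = np.delete(points, i, axis=0)
        vals = []
        for a, b in combinations(others, 2):
            pa, pb = a - p, b - p
            vals.append(np.dot(pa, pb) / (np.dot(pa, pa) * np.dot(pb, pb)))
        scores.append(np.var(vals))
    return np.array(scores)

rng = np.random.default_rng(7)
cluster = rng.normal(size=(30, 2))
outlier = np.array([[8.0, 8.0]])
data = np.vstack([cluster, outlier])

scores = abof(data)
print("index with smallest ABOF (flagged as outlier):", int(np.argmin(scores)))
```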

  5. Four-Dimensional CT of the Diaphragm in Children: Initial Experience

    Science.gov (United States)

    2018-01-01

    Objective To evaluate the technical feasibility of four-dimensional (4D) CT for the functional evaluation of the pediatric diaphragm. Materials and Methods In 22 consecutive children (median age 3.5 months, age range 3 days–3 years), 4D CT was performed to assess diaphragm motion. Diaphragm abnormalities were qualitatively evaluated and diaphragm motion was quantitatively measured on 4D CT. Lung density changes between peak inspiration and expiration were measured in the basal lung parenchyma. The diaphragm motions and lung density changes measured on 4D CT were compared between various diaphragm conditions. In 11 of the 22 children, chest sonography was available for comparison. Results Four-dimensional CT demonstrated normal diaphragm (n = 8), paralysis (n = 10), eventration (n = 3), and diffusely decreased motion (n = 1). Chest sonography demonstrated normal diaphragm (n = 2), paralysis (n = 6), eventration (n = 2), and right pleural effusion (n = 1). The sonographic findings were concordant with the 4D CT findings in 90.9% (10/11) of the patients. In diaphragm paralysis, the affected diaphragm motion was significantly decreased compared with the contralateral normal diaphragm motion (−1.1 ± 2.2 mm vs. 7.6 ± 3.8 mm, p = 0.005). The normal diaphragms showed significantly greater motion than the paralyzed diaphragms (4.5 ± 2.1 mm vs. −1.1 ± 2.2 mm, p Hounsfield units [HU] vs. 180 ± 71 HU, p = 0.03), while no significant differences were found between the normal diaphragms and the paralyzed diaphragms (136 ± 66 HU vs. 89 ± 73 HU, p = 0.1) or between the normal diaphragms and the contralateral normal diaphragms in paralysis (136 ± 66 HU vs. 180 ± 71 HU, p = 0.1). Conclusion The functional evaluation of the pediatric diaphragm is feasible with 4D CT in select children. PMID:29354007

  6. Two-dimensional thermal simulations of aluminum and carbon ion strippers for experiments at SPIRAL2 using the highest beam intensities

    International Nuclear Information System (INIS)

    Tahir, N.A.; Kim, V.; Lamour, E.; Lomonosov, I.V.; Piriz, A.R.; Rozet, J.P.; Stöhlker, Th.; Sultanov, V.; Vernhet, D.

    2012-01-01

    In this paper we report on two-dimensional numerical simulations of the heating of a rotating, wheel-shaped target impacted by the full intensity of the ion beam that will be delivered by the SPIRAL2 facility at Caen, France. The purpose of this work is to study the heating of the solid targets that will be used to strip the fast ions of SPIRAL2 to the required high charge state for the FISIC (Fast Ion–Slow Ion Collision) experiments. Strippers of aluminum with different emissivities and of carbon are exposed to high beam currents of different ion species such as oxygen, neon and argon. These studies show that carbon, due to its much higher sublimation temperature and much higher emissivity, is more favorable than aluminum. For the highest beam intensities, an aluminum stripper does not survive. However, the problem of induced thermal stresses and long-term material fatigue needs to be investigated before a final conclusion can be drawn.

  7. Building a Framework for Engineering Design Experiences in High School

    Science.gov (United States)

    Denson, Cameron D.; Lammi, Matthew

    2014-01-01

    In this article, Denson and Lammi put forth a conceptual framework that will help promote the successful infusion of engineering design experiences into high school settings. When considering a conceptual framework of engineering design in high school settings, it is important to consider the complex issue at hand. For the purposes of this…

  8. High-speed two-dimensional laser scanner based on Bragg gratings stored in photothermorefractive glass.

    Science.gov (United States)

    Yaqoob, Zahid; Arain, Muzammil A; Riza, Nabeel A

    2003-09-10

    A high-speed free-space wavelength-multiplexed optical scanner with high-speed wavelength selection coupled with narrowband volume Bragg gratings stored in photothermorefractive (PTR) glass is reported. The proposed scanner with no moving parts has a modular design with a wide angular scan range, accurate beam pointing, low scanner insertion loss, and two-dimensional beam scan capabilities. We present a complete analysis and design procedure for storing multiple tilted Bragg-grating structures in a single PTR glass volume (for normal incidence) in an optimal fashion. Because the scanner design is modular, many PTR glass volumes (each having multiple tilted Bragg-grating structures) can be stacked together, providing an efficient throughput with operations in both the visible and the infrared (IR) regions. A proof-of-concept experimental study is conducted with four Bragg gratings in independent PTR glass plates, and both visible and IR region scanner operations are demonstrated.

  9. Penalized estimation for competing risks regression with applications to high-dimensional covariates

    DEFF Research Database (Denmark)

    Ambrogi, Federico; Scheike, Thomas H.

    2016-01-01

    … Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according…

  10. Graphene quantum dots-three-dimensional graphene composites for high-performance supercapacitors.

    Science.gov (United States)

    Chen, Qing; Hu, Yue; Hu, Chuangang; Cheng, Huhu; Zhang, Zhipan; Shao, Huibo; Qu, Liangti

    2014-09-28

    Graphene quantum dots (GQDs) have been successfully deposited onto three-dimensional graphene (3DG) by a benign electrochemical method, and the ordered 3DG structure remains intact after the uniform deposition of GQDs. In addition, the capacitive properties of the as-formed GQD-3DG composites are evaluated in symmetrical supercapacitors. It is found that the supercapacitor fabricated from the GQD-3DG composite is highly stable and exhibits a high specific capacitance of 268 F g⁻¹, representing a more than 90% improvement over that of the supercapacitor made from pure 3DG electrodes (136 F g⁻¹). Owing to the convenience of the current method, it can be further applied to other well-defined electrode materials, such as carbon nanotubes, carbon aerogels and conjugated polymers, to improve the performance of supercapacitors.

  11. A High Sensitivity Three-Dimensional-Shape Sensing Patch Prepared by Lithography and Inkjet Printing

    Directory of Open Access Journals (Sweden)

    Cheng-Yao Lo

    2012-03-01

    Full Text Available A process combining conventional photolithography and a novel inkjet printing method for the manufacture of high-sensitivity three-dimensional-shape (3DS) sensing patches was proposed and demonstrated. The supporting curvature ranges from 1.41 to 6.24 × 10⁻² mm⁻¹, and the sensing patch has a thickness of less than 130 μm and dimensions of 20 × 20 mm². A complete finite element method (FEM) model with simulation results was developed based on the buckling of columns and the deflection equation. The results show high compatibility of drop-on-demand (DOD) inkjet printing with photolithography, and the interferometer design also supports bi-directional detection of deformation. The 3DS sensing patch can be operated remotely without any power consumption. It provides a novel and alternative option compared with other optical curvature sensors.

  12. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
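    A minimal sketch of the gene-masking idea follows: a binary chromosome switches features on and off and is evolved by a simple genetic algorithm to maximize cross-validated classifier accuracy. The data, classifier, and GA settings below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                           n_redundant=0, random_state=1)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the unmasked features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # random initial masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]            # keep the best half
    children = []
    for _ in range(10):                                      # uniform crossover + mutation
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)
        flip = rng.random(X.shape[1]) < 0.01
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("features kept:", int(best.sum()), "CV accuracy:", round(fitness(best), 3))
```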

  13. Three-dimensional analysis of harmonic generation in high-gain free-electron lasers

    International Nuclear Information System (INIS)

    Huang, Zhirong; Kim, Kwang-Je

    2000-01-01

    In a high-gain free-electron laser (FEL) employing a planar undulator, strong bunching at the fundamental wavelength can drive substantial bunching and power levels at the harmonic frequencies. In this paper we investigate the three-dimensional evolution of harmonic radiation based on the coupled Maxwell-Klimontovich equations that take into account nonlinear harmonic interactions. Each harmonic field is a sum of a linear amplification term and a term driven by nonlinear harmonic interactions. After a certain stage of exponential growth, the dominant nonlinear term is determined by interactions of the lower nonlinear harmonics and the fundamental radiation. As a result, the gain length, transverse profile, and temporal structure of the first few harmonics are eventually governed by those of the fundamental. Transversely coherent third-harmonic radiation power is found to approach 1% of the fundamental power level for current high-gain FEL projects

  14. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  15. High-resolution liquid patterns via three-dimensional droplet shape control.

    Science.gov (United States)

    Raj, Rishi; Adera, Solomon; Enright, Ryan; Wang, Evelyn N

    2014-09-25

    Understanding liquid dynamics on surfaces can provide insight into nature's design and enable fine manipulation capability in biological, manufacturing, microfluidic and thermal management applications. Of particular interest is the ability to control the shape of the droplet contact area on the surface, which is typically circular on a smooth homogeneous surface. Here, we show the ability to tailor various droplet contact area shapes ranging from squares, rectangles, hexagons, octagons, to dodecagons via the design of the structure or chemical heterogeneity on the surface. We simultaneously obtain the necessary physical insights to develop a universal model for the three-dimensional droplet shape by characterizing the droplet side and top profiles. Furthermore, arrays of droplets with controlled shapes and high spatial resolution can be achieved using this approach. This liquid-based patterning strategy promises low-cost fabrication of integrated circuits, conductive patterns and bio-microarrays for high-density information storage and miniaturized biochips and biosensors, among others.

  16. Three-dimensional Force and Kinematic Interactions in V1 Skating at High Speeds.

    Science.gov (United States)

    Stöggl, Thomas; Holmberg, Hans-Christer

    2015-06-01

    To describe the detailed kinetics and kinematics associated with use of the V1 skating technique at high skiing speeds and to identify factors that predict performance. Fifteen elite male cross-country skiers performed an incremental roller-skiing speed test (Vpeak) on a treadmill using the V1 skating technique. Pole and plantar forces and whole-body kinematics were monitored at four submaximal speeds. The propulsive force of the "strong side" pole was greater than that of the "weak side" (P [...]) [...] skating at high speeds. The faster skiers exhibit more symmetric leg motion on the "strong" and "weak" sides, as well as more synchronized poling. With respect to methods, the pressure insoles and three-dimensional kinematics in combination with the leg push-off model described here can easily be applied to all skating techniques, aiding in the evaluation of skiing techniques and comparison of effectiveness.

  17. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like Infiniband or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems, a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable backend...

  18. Nano-engineering of three-dimensional core/shell nanotube arrays for high performance supercapacitors

    Science.gov (United States)

    Grote, Fabian; Wen, Liaoyong; Lei, Yong

    2014-06-01

    Large-scale arrays of core/shell nanostructures are highly desirable to enhance the performance of supercapacitors. Here we demonstrate an innovative template-based fabrication technique with high structural controllability, which is capable of synthesizing well-ordered three-dimensional arrays of SnO2/MnO2 core/shell nanotubes for electrochemical energy storage in supercapacitor applications. The SnO2 core is fabricated by atomic layer deposition and provides a highly electrical conductive matrix. Subsequently a thin MnO2 shell is coated by electrochemical deposition onto the SnO2 core, which guarantees a short ion diffusion length within the shell. The core/shell structure shows an excellent electrochemical performance with a high specific capacitance of 910 F g-1 at 1 A g-1 and a good rate capability, retaining 217 F g-1 at 50 A g-1. These results shall pave the way to realize aqueous based asymmetric supercapacitors with high specific power and high specific energy.

  19. Predicting Future High-Cost Schizophrenia Patients Using High-Dimensional Administrative Data

    Directory of Open Access Journals (Sweden)

    Yajuan Wang

    2017-06-01

    Background: The burden of serious and persistent mental illness such as schizophrenia is substantial and requires health-care organizations to have adequate risk adjustment models to effectively allocate their resources to managing patients who are at the greatest risk. Currently available models underestimate health-care costs for those with mental or behavioral health conditions. Objectives: The study aimed to develop and evaluate predictive models for identification of future high-cost schizophrenia patients using advanced supervised machine learning methods. Methods: This was a retrospective study using a payer administrative database. The study cohort consisted of 97,862 patients diagnosed with schizophrenia (ICD9 code 295.*) from January 2009 to June 2014. Training (n = 34,510) and study evaluation (n = 30,077) cohorts were derived based on 12-month observation and prediction windows (PWs). The target was average total cost/patient/month in the PW. Three models (baseline, intermediate, final) were developed to assess the value of different variable categories for cost prediction (demographics, coverage, cost, health-care utilization, antipsychotic medication usage, and clinical conditions). Scalable orthogonal regression, significant attribute selection in high dimensions method, and random forests regression were used to develop the models. The trained models were assessed in the evaluation cohort using the regression R2, patient classification accuracy (PCA), and cost accuracy (CA). The model performance was compared to the Centers for Medicare & Medicaid Services Hierarchical Condition Categories (CMS-HCC) model. Results: At top 10% cost cutoff, the final model achieved 0.23 R2, 43% PCA, and 63% CA; in contrast, the CMS-HCC model achieved 0.09 R2, 27% PCA with 45% CA. The final model and the CMS-HCC model identified 33 and 22%, respectively, of total cost at the top 10% cost cutoff. Conclusion: Using advanced feature selection leveraging detailed
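    As a hedged sketch of the general workflow described above (predict next-period cost from prior-period features, then compare predicted and actual top-decile membership), the snippet below trains a random forest regressor on synthetic data and reports top-10% agreement and the share of top-decile cost captured. The features, cost model, and metric definitions are simplified stand-ins for the study's administrative-claims variables and accuracy measures.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n, p = 5000, 60
X = rng.standard_normal((n, p))                              # stand-in utilization/clinical features
cost = np.exp(1.0 + 0.4 * X[:, :5].sum(axis=1) + rng.normal(0.0, 0.5, n))  # skewed cost target

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, test_size=0.3, random_state=7)
model = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)
pred = model.predict(X_te)

high_true = y_te >= np.quantile(y_te, 0.9)                   # actual top-10% patients
high_pred = pred >= np.quantile(pred, 0.9)                   # predicted top-10% patients
agreement = (high_true & high_pred).sum() / high_true.sum()  # simplified "PCA"-style measure
cost_captured = y_te[high_pred].sum() / y_te[high_true].sum()
print(f"top-10% patient agreement: {agreement:.2f}, cost captured: {cost_captured:.2f}")
```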

  20. High-Level Heteroatom Doped Two-Dimensional Carbon Architectures for Highly Efficient Lithium-Ion Storage

    Directory of Open Access Journals (Sweden)

    Zhijie Wang

    2018-04-01

    In this work, high-level heteroatom doped two-dimensional hierarchical carbon architectures (H-2D-HCA) are developed for highly efficient Li-ion storage applications. The achieved H-2D-HCA possesses a hierarchical 2D morphology consisting of tiny carbon nanosheets vertically grown on carbon nanoplates and containing a hierarchical porosity with multiscale pore size. More importantly, the H-2D-HCA shows abundant heteroatom functionality, with sulfur (S) doping of 0.9% and nitrogen (N) doping of as high as 15.5%, in which the electrochemically active N accounts for 84% of total N heteroatoms. In addition, the H-2D-HCA also has an expanded interlayer distance of 0.368 nm. When used as lithium-ion battery anodes, it shows excellent Li-ion storage performance. Even at a high current density of 5 A g−1, it still delivers a high discharge capacity of 329 mA h g−1 after 1,000 cycles. First-principles calculations verify that such unique microstructure characteristics and high-level heteroatom doping nature can enhance Li adsorption stability, electronic conductivity and Li diffusion mobility of carbon nanomaterials. Therefore, the H-2D-HCA could be promising candidates for next-generation LIB anodes.

  1. COSMO-PAFOG: Three-dimensional fog forecasting with the high-resolution COSMO-model

    Science.gov (United States)

    Hacker, Maike; Bott, Andreas

    2017-04-01

    The presence of fog can have critical impact on shipping, aviation and road traffic increasing the risk of serious accidents. Besides these negative impacts of fog, in arid regions fog is explored as a supplementary source of water for human settlements. Thus the improvement of fog forecasts holds immense operational value. The aim of this study is the development of an efficient three-dimensional numerical fog forecast model based on a mesoscale weather prediction model for the application in the Namib region. The microphysical parametrization of the one-dimensional fog forecast model PAFOG (PArameterized FOG) is implemented in the three-dimensional nonhydrostatic mesoscale weather prediction model COSMO (COnsortium for Small-scale MOdeling) developed and maintained by the German Meteorological Service. Cloud water droplets are introduced in COSMO as prognostic variables, thus allowing a detailed description of droplet sedimentation. Furthermore, a visibility parametrization depending on the liquid water content and the droplet number concentration is implemented. The resulting fog forecast model COSMO-PAFOG is run with kilometer-scale horizontal resolution. In vertical direction, we use logarithmically equidistant layers with 45 of 80 layers in total located below 2000 m. Model results are compared to satellite observations and synoptic observations of the German Meteorological Service for a domain in the west of Germany, before the model is adapted to the geographical and climatological conditions in the Namib desert. COSMO-PAFOG is able to represent the horizontal structure of fog patches reasonably well. Especially small fog patches typical of radiation fog can be simulated in agreement with observations. Ground observations of temperature are also reproduced. Simulations without the PAFOG microphysics yield unrealistically high liquid water contents. This in turn reduces the radiative cooling of the ground, thus inhibiting nocturnal temperature decrease. The

  2. Position sensitive detection coupled to high-resolution time-of-flight mass spectrometry: Imaging for molecular beam deflection experiments

    International Nuclear Information System (INIS)

    Abd El Rahim, M.; Antoine, R.; Arnaud, L.; Barbaire, M.; Broyer, M.; Clavier, Ch.; Compagnon, I.; Dugourd, Ph.; Maurelli, J.; Rayane, D.

    2004-01-01

    We have developed and tested a high-resolution time-of-flight mass spectrometer coupled to a position sensitive detector for molecular beam deflection experiments. The major achievement of this new spectrometer is to provide a three-dimensional imaging (X and Y positions and time-of-flight) of the ion packet on the detector, with a high acquisition rate and a high resolution on both the mass and the position. The calibration of the experimental setup and its application to molecular beam deflection experiments are discussed

  3. High-dimensional gene expression profiling studies in high and low responders to primary smallpox vaccination.

    Science.gov (United States)

    Haralambieva, Iana H; Oberg, Ann L; Dhiman, Neelam; Ovsyannikova, Inna G; Kennedy, Richard B; Grill, Diane E; Jacobson, Robert M; Poland, Gregory A

    2012-11-15

    The mechanisms underlying smallpox vaccine-induced variations in immune responses are not well understood, but are of considerable interest to a deeper understanding of poxvirus immunity and correlates of protection. We assessed transcriptional messenger RNA expression changes in 197 recipients of primary smallpox vaccination representing the extremes of humoral and cellular immune responses. The 20 most significant differentially expressed genes include a tumor necrosis factor-receptor superfamily member, an interferon (IFN) gene, a chemokine gene, zinc finger protein genes, nuclear factors, and histones (P ≤ 1.06 × 10−20, q ≤ 2.64 × 10−17). A pathway analysis identified 4 enriched pathways with cytokine production by the T-helper 17 subset of CD4+ T cells being the most significant pathway (P = 3.42 × 10−5). Two pathways (antiviral actions of IFNs, P = 8.95 × 10−5; and IFN-α/β signaling pathway, P = 2.92 × 10−4), integral to innate immunity, were enriched when comparing high with low antibody responders (false discovery rate < 0.05). Genes related to immune function and transcription (TLR8, P = .0002; DAPP1, P = .0003; LAMP3, P = 9.96 × 10−5; NR4A2, P ≤ .0002; EGR3, P = 4.52 × 10−5), and other genes with a possible impact on immunity (LNPEP, P = 3.72 × 10−5; CAPRIN1, P = .0001; XRN1, P = .0001), were found to be expressed differentially in high versus low antibody responders. We identified novel and known immunity-related genes and pathways that may account for differences in immune response to smallpox vaccination.

  4. Atmospheric and dispersion modeling in areas of highly complex terrain employing a four-dimensional data assimilation technique

    International Nuclear Information System (INIS)

    Fast, J.D.; O'Steen, B.L.

    1994-01-01

    The results of this study indicate that the current data assimilation technique can have a positive impact on the mesoscale flow fields; however, care must be taken in its application to grids of relatively fine horizontal resolution. Continuous FDDA is a useful tool in producing high-resolution mesoscale analysis fields that can be used to (1) create better initial conditions for mesoscale atmospheric models and (2) drive transport models for dispersion studies. While RAMS is capable of predicting the qualitative flow during this evening, additional experiments need to be performed to improve the prognostic forecasts made by RAMS and refine the FDDA procedure so that the overall errors are reduced even further. Despite the fact that a great deal of computational time is necessary in executing RAMS and LPDM in the configuration employed in this study, recent advances in workstations are making applications such as this more practical. As the speed of these machines increases in the next few years, it will become feasible to employ prognostic, three-dimensional mesoscale/transport models to routinely predict atmospheric dispersion of pollutants, even in highly complex terrain. For example, the version of RAMS in this study could be run in a "nowcasting" mode that would continually assimilate local and regional observations as soon as they become available. The atmospheric physics in the model would be used to determine the wind field where no observations are available. The three-dimensional flow fields could be used as dynamic initial conditions for a model forecast. The output from this type of modeling system will have to be compared to existing diagnostic, mass-consistent models to determine whether the wind field and dispersion forecasts are significantly improved

  5. The songbird syrinx morphome: a three-dimensional, high-resolution, interactive morphological map of the zebra finch vocal organ

    Directory of Open Access Journals (Sweden)

    Düring Daniel N

    2013-01-01

    Background: Like human infants, songbirds learn their species-specific vocalizations through imitation learning. The birdsong system has emerged as a widely used experimental animal model for understanding the underlying neural mechanisms responsible for vocal production learning. However, how neural impulses are translated into the precise motor behavior of the complex vocal organ (syrinx) to create song is poorly understood. First and foremost, we lack a detailed understanding of syringeal morphology. Results: To fill this gap we combined non-invasive (high-field magnetic resonance imaging and micro-computed tomography) and invasive techniques (histology and micro-dissection) to construct the annotated high-resolution three-dimensional dataset, or morphome, of the zebra finch (Taeniopygia guttata) syrinx. We identified and annotated syringeal cartilage, bone and musculature in situ in unprecedented detail. We provide interactive three-dimensional models that greatly improve the communication of complex morphological data and our understanding of syringeal function in general. Conclusions: Our results show that the syringeal skeleton is optimized for low weight driven by physiological constraints on song production. The present refinement of muscle organization and identity elucidates how apposed muscles actuate different syringeal elements. Our dataset allows for more precise predictions about muscle co-activation and synergies and has important implications for muscle activity and stimulation experiments. We also demonstrate how the syrinx can be stabilized during song to reduce mechanical noise and, as such, enhance repetitive execution of stereotypic motor patterns. In addition, we identify a cartilaginous structure suited to play a crucial role in the uncoupling of sound frequency and amplitude control, which permits a novel explanation of the evolutionary success of songbirds.

  6. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    Science.gov (United States)

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses the k-means clustering and the triangle inequality to accelerate the searching for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces.
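    The two kMkNN stages described above lend themselves to a compact sketch: preprocess the training set with k-means, then search clusters nearest-first and use the triangle inequality |d(q, c) − d(x, c)| ≤ d(q, x) to skip points that cannot beat the current k-th best distance. The cluster count and data below are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build(X, n_clusters=20, seed=0):
    """Buildup stage: k-means clustering plus cached point-to-centre distances."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    d_to_center = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    clusters = [np.flatnonzero(km.labels_ == c) for c in range(n_clusters)]
    return X, km.cluster_centers_, clusters, d_to_center

def query(index, q, k=5):
    """Searching stage: visit clusters nearest-first, prune with the triangle inequality."""
    X, centers, clusters, d_to_center = index
    d_q_centers = np.linalg.norm(centers - q, axis=1)
    best = []                                          # (distance, point index), kept sorted
    for c in np.argsort(d_q_centers):
        for i in clusters[c]:
            lower_bound = abs(d_q_centers[c] - d_to_center[i])
            if len(best) == k and lower_bound >= best[-1][0]:
                continue                               # cannot beat the current k-th best
            best.append((np.linalg.norm(X[i] - q), i))
            best = sorted(best)[:k]
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 50))
index = build(X)
print(query(index, rng.standard_normal(50), k=3))
```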

  7. Construction of high-dimensional neural network potentials using environment-dependent atom pairs.

    Science.gov (United States)

    Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg

    2012-05-21

    An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
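    The "sum of environment-dependent atomic contributions" construction described above can be illustrated schematically: each atom's local environment is reduced to a descriptor vector, a small feed-forward network maps it to an atomic energy, and the total energy is the sum over atoms. The descriptor below (inverse nearest-neighbour distances) and the untrained random weights are crude placeholders, not the symmetry functions or pair descriptors of the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # tiny untrained MLP
W2, b2 = rng.standard_normal(8), 0.0

def atomic_energy(descriptor):
    hidden = np.tanh(W1 @ descriptor + b1)
    return W2 @ hidden + b2

def descriptor(positions, i, n_neighbors=4):
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = np.sort(d[d > 0.0])[:n_neighbors]              # nearest-neighbour distances
    return 1.0 / d                                      # crude environment fingerprint

def total_energy(positions):
    return sum(atomic_energy(descriptor(positions, i)) for i in range(len(positions)))

positions = rng.uniform(0.0, 5.0, size=(10, 3))         # ten atoms in a box, arbitrary units
print("E_total =", float(total_energy(positions)))
```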

  8. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
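    A compact sketch of the two-level idea follows: summarize each pre-defined cluster of nodes by a few PCA factors (regional level), then quantify dependence between clusters with the RV coefficient of their factor scores (global level). The clusters and signals are synthetic, and the sketch omits the MSFA estimation machinery itself.

```python
import numpy as np
from sklearn.decomposition import PCA

def rv_coefficient(A, B):
    """RV coefficient between two column-centred score matrices with the same rows."""
    A = A - A.mean(axis=0); B = B - B.mean(axis=0)
    Sa, Sb = A @ A.T, B @ B.T
    return np.trace(Sa @ Sb) / np.sqrt(np.trace(Sa @ Sa) * np.trace(Sb @ Sb))

rng = np.random.default_rng(0)
T = 300                                            # time points
shared = rng.standard_normal((T, 1))               # latent signal shared by both regions
cluster1 = shared @ rng.standard_normal((1, 40)) + 0.5 * rng.standard_normal((T, 40))
cluster2 = shared @ rng.standard_normal((1, 60)) + 0.5 * rng.standard_normal((T, 60))

factors1 = PCA(n_components=3).fit_transform(cluster1)   # regional factors, cluster 1
factors2 = PCA(n_components=3).fit_transform(cluster2)   # regional factors, cluster 2
print("between-cluster RV coefficient:", round(rv_coefficient(factors1, factors2), 3))
```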

  9. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
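    A minimal sketch of the FANS recipe on synthetic data: estimate the two class-conditional marginal densities of each feature (here with Gaussian kernel density estimates), replace each feature by its estimated log density ratio, and fit an L1-penalized logistic regression on the transformed features. The KDE choice, tuning constants, and absence of the paper's sample-splitting are simplifications.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def density_ratio_transform(X_fit, y_fit, X_new, eps=1e-12):
    """Replace each feature by the log ratio of its class-conditional marginal densities."""
    Z = np.empty_like(X_new)
    for j in range(X_fit.shape[1]):
        f1 = gaussian_kde(X_fit[y_fit == 1, j])
        f0 = gaussian_kde(X_fit[y_fit == 0, j])
        Z[:, j] = np.log(f1(X_new[:, j]) + eps) - np.log(f0(X_new[:, j]) + eps)
    return Z

Z_tr = density_ratio_transform(X_tr, y_tr, X_tr)
Z_te = density_ratio_transform(X_tr, y_tr, X_te)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Z_tr, y_tr)
print("test accuracy on augmented features:", round(clf.score(Z_te, y_te), 3))
```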

  10. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
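    The core shrinkage idea can be illustrated in a few lines: pull each variable's sample variance toward a pooled value and form a regularized t-like statistic, which is better behaved than the ordinary t-statistic when group sizes are tiny. The actual MVR procedure clusters variables with a similarity statistic and regularizes means and variances jointly, which this simplified sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n1, n2 = 2000, 5, 5                         # many variables, tiny groups
group1 = rng.normal(0.0, 1.0, size=(n1, p))
group2 = rng.normal(0.0, 1.0, size=(n2, p))
group2[:, :50] += 1.5                          # the first 50 variables truly differ

def regularized_t(g1, g2, shrink=0.5):
    """Two-sample t-like statistic with variances shrunk toward their grand mean."""
    m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
    v1, v2 = g1.var(axis=0, ddof=1), g2.var(axis=0, ddof=1)
    dof1, dof2 = g1.shape[0] - 1, g2.shape[0] - 1
    pooled = (dof1 * v1 + dof2 * v2) / (dof1 + dof2)
    v_shrunk = shrink * pooled.mean() + (1.0 - shrink) * pooled
    se = np.sqrt(v_shrunk * (1.0 / g1.shape[0] + 1.0 / g2.shape[0]))
    return (m1 - m2) / se

t_reg = regularized_t(group1, group2)
print("variables ranked most different:", np.argsort(-np.abs(t_reg))[:10])
```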

  11. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
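    A toy chaos-game representation conveys the memory-type mechanism mentioned above: each subject's sequence of discretized variable values drives the iterated map x ← (x + corner)/2, so the resulting 2D point cloud encodes the order and pattern of that subject's values. This is a generic chaos-game sketch, not the full Butterfly pipeline of feature filtering, model evaluation, and visualization.

```python
import numpy as np

CORNERS = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # one corner per bin

def chaos_game_points(values, n_bins=4):
    """Map a 1D sequence of measurements to a trajectory in the unit square."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    symbols = np.digitize(values, edges)             # 0..3: which corner to move toward
    point, trajectory = np.array([0.5, 0.5]), []
    for s in symbols:
        point = (point + CORNERS[s]) / 2.0           # the "memory" update
        trajectory.append(point.copy())
    return np.array(trajectory)

rng = np.random.default_rng(0)
subject = rng.standard_normal(500)                   # one subject's variable values
traj = chaos_game_points(subject)
print("trajectory shape:", traj.shape, "mean position:", traj.mean(axis=0).round(3))
```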

  12. Key factors for a high-quality VR experience

    Science.gov (United States)

    Champel, Mary-Luc; Doré, Renaud; Mollet, Nicolas

    2017-09-01

    For many years, Virtual Reality has been presented as a promising technology that could deliver a truly new experience to users. The media and entertainment industry is now investigating the possibility to offer a video-based VR 360 experience. Nevertheless, there is a substantial risk that VR 360 could have the same fate as 3DTV if it cannot offer more than just being the next fad. The present paper aims at presenting the various quality factors required for a high-quality VR experience. More specifically, this paper will focus on the main three VR quality pillars: visual, audio and immersion.

  13. Thermophysical and Mechanical Properties of Granite and Its Effects on Borehole Stability in High Temperature and Three-Dimensional Stress

    Directory of Open Access Journals (Sweden)

    Wang Yu

    2014-01-01

    When exploiting the deep resources, the surrounding rock readily undergoes the hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments were conducted to discuss the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500-meter borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we obtain the thermo-fluid-solid model and governing equation. ANSYS-APDL was also used to simulate the temperature influence on elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite’s stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while Poisson ratio and permeability of granite increase. The threshold pressure and temperature are 15 MPa and 200°C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with the increase of temperature. The coupling of thermo-fluid-solid, greatly impacting the borehole stability, proves to be a good method to analyze similar problems of other formations.

  14. Thermophysical and mechanical properties of granite and its effects on borehole stability in high temperature and three-dimensional stress.

    Science.gov (United States)

    Wang, Yu; Liu, Bao-lin; Zhu, Hai-yan; Yan, Chuan-liang; Li, Zhi-jun; Wang, Zhi-qiao

    2014-01-01

    When exploiting the deep resources, the surrounding rock readily undergoes the hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments were conducted to discuss the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500-meter borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we obtain the thermo-fluid-solid model and governing equation. ANSYS-APDL was also used to simulate the temperature influence on elastic modulus, Poisson ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite's stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while Poisson ratio and permeability of granite increase. The threshold pressure and temperature are 15 MPa and 200 °C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with the increase of temperature. The coupling of thermo-fluid-solid, greatly impacting the borehole stability, proves to be a good method to analyze similar problems of other formations.

  15. Accuracy and initial clinical experience with measurement software (advanced vessel analysis) in three-dimensional imaging

    International Nuclear Information System (INIS)

    Abe, Toshi; Hirohata, Masaru; Tanigawa, Hitoshi

    2002-01-01

    Recently, the clinical benefits of three dimensional (3D) imaging, such as 3D-CTA and 3D-DSA, in cerebro-vascular disease have been widely recognized. Software for quantitative analysis of vascular structure in 3D imaging (advanced vessel analysis: AVA) has been developed. We evaluated AVA with both phantom studies and a few clinical cases. In spiral and curvy aluminum tube phantom studies, the accuracy of diameter measurements was good in 3D images produced from data sets generated by multi-detector row CT or rotational angiography. The measurement error was less than 0.03 mm on aluminum tube phantoms that were 3 mm and 5 mm in diameter. In the clinical studies, the difference in carotid artery diameter measurements between 2D-DSA and 3D-DSA was less than 0.3 mm. The measurement of length, diameter and angle by AVA should provide useful information for planning surgical and endovascular treatments of cerebro-vascular disease. (author)

  16. High thermoelectric power factor in two-dimensional crystals of MoS2

    Science.gov (United States)

    Hippalgaonkar, Kedar; Wang, Ying; Ye, Yu; Qiu, Diana Y.; Zhu, Hanyu; Wang, Yuan; Moore, Joel; Louie, Steven G.; Zhang, Xiang

    2017-03-01

    The quest for high-efficiency heat-to-electricity conversion has been one of the major driving forces toward renewable energy production for the future. Efficient thermoelectric devices require high voltage generation from a temperature gradient and a large electrical conductivity while maintaining a low thermal conductivity. For a given thermal conductivity and temperature, the thermoelectric power factor is determined by the electronic structure of the material. Low dimensionality (1D and 2D) opens new routes to a high power factor due to the unique density of states (DOS) of confined electrons and holes. The 2D transition metal dichalcogenide (TMDC) semiconductors represent a new class of thermoelectric materials not only due to such confinement effects but especially due to their large effective masses and valley degeneracies. Here, we report a power factor of MoS2 as large as 8.5 mW m-1 K-2 at room temperature, which is among the highest measured in traditional, gapped thermoelectric materials. To obtain these high power factors, we perform thermoelectric measurements on few-layer MoS2 in the metallic regime, which allows us to access the 2D DOS near the conduction band edge and exploit the effect of 2D confinement on electron scattering rates, resulting in a large Seebeck coefficient. The demonstrated high, electronically modulated power factor in 2D TMDCs holds promise for efficient thermoelectric energy conversion.

  17. High temperature graphite irradiation creep experiment in the Dragon Reactor. Dragon Project report

    Energy Technology Data Exchange (ETDEWEB)

    Manzel, R.; Everett, M. R.; Graham, L. W.

    1971-05-15

    The irradiation induced creep of pressed Gilsocarbon graphite under constant tensile stress has been investigated in an experiment carried out in FE 317 of the OECD High Temperature Gas Cooled Reactor "Dragon" at Winfrith (England). The experiment covered a temperature range of 850 deg C to 1240 deg C and reached a maximum fast neutron dose of 1.19 x 10^21 n cm-2 NDE (Nickel Dose DIDO Equivalent). Irradiation induced dimensional changes of a string of unrestrained graphite specimens are compared with the dimensional changes of three strings of restrained graphite specimens stressed to 40%, 58%, and 70% of the initial ultimate tensile strength of pressed Gilsocarbon graphite. Total creep strains ranging from 0.18% to 1.25% have been measured and a linear dependence of creep strain on applied stress was observed. Mechanical property measurements carried out before and after irradiation demonstrate that Gilsocarbon graphite can accommodate significant creep strains without failure or structural deterioration. Total creep strains are in excellent agreement with other data; however, the results indicate a relatively large temperature dependent primary creep component which at 1200 deg C approaches a value which is three times larger than the normally assumed initial elastic strain. Secondary creep constants derived from the experiment show a temperature dependence and are in fair agreement with data reported elsewhere. A possible determination of the results is given.

  18. Rare Particle Searches with the high altitude SLIM experiment

    CERN Document Server

    Balestra, S; Fabbri, F; Giacomelli, G; Giacomelli, R; Giorgini, M; Kumar, A; Manzoor, S; McDonald, J; Margiotta, A; Medinaceli, E; Nogales, J; Patrizii, L; Popa, V; Quereshi, I; Saavedra, O; Sher, G; Shahzad, M; Spurio, M; Ticona, R; Togo, V; Velarde, A; Zanini, A

    2005-01-01

    The search for rare particles in the cosmic radiation remains one of the main aims of non-accelerator particle astrophysics. Experiments at high altitude allow lower mass thresholds with respect to detectors at sea level or underground. The SLIM experiment is a large array of nuclear track detectors located at the Chacaltaya High Altitude Laboratory (5290 m a.s.l.). The preliminary results from the analysis of a part of the first 236 sq.m exposed for more than 3.6 y are here reported. The detector is sensitive to Intermediate Mass Magnetic Monopoles and to SQM nuggets and Q-balls, which are possible Dark Matter candidates.

  19. Effects of SiNx on two-dimensional electron gas and current collapse of AlGaN/GaN high electron mobility transistors

    International Nuclear Information System (INIS)

    Fan, Ren; Zhi-Biao, Hao; Lei, Wang; Lai, Wang; Hong-Tao, Li; Yi, Luo

    2010-01-01

    SiNx is commonly used as a passivation material for AlGaN/GaN high electron mobility transistors (HEMTs). In this paper, the effects of SiNx passivation film on both two-dimensional electron gas characteristics and current collapse of AlGaN/GaN HEMTs are investigated. The SiNx films are deposited by high- and low-frequency plasma-enhanced chemical vapour deposition, and they display different strains on the AlGaN/GaN heterostructure, which can explain the experimental results. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  20. Compilation of current high-energy-physics experiments

    International Nuclear Information System (INIS)

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.

    1980-04-01

    This is the third edition of a compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking of data by 1 January 1976

  1. High electro-catalytic activities of glucose oxidase embedded one-dimensional ZnO nanostructures

    International Nuclear Information System (INIS)

    Sarkar, Nirmal K; Bhattacharyya, Swapan K

    2013-01-01

    One-dimensional ZnO nanorods and nanowires are separately synthesized on Zn substrate by simple hydrothermal processes at low temperatures. Electro-catalytic responses of glucose oxidase/ZnO/Zn electrodes using these two synthesized nanostructures of ZnO are reported and compared with others available in the literature. It is apparent that the Michaelis–Menten constant, K_M(app), for the present ZnO nanowire, having a greater aspect ratio, is found to be the lowest when compared with others. This sensor shows a lower oxidation peak potential with a long detection range of 6.6 μM–380 mM and the highest sensitivity of ∼35.1 μA cm−2 mM−1 among the reported values in the literature. Enzyme catalytic efficiency and turnover numbers are also found to be remarkably high. (paper)

  2. Resolving molecular vibronic structure using high-sensitivity two-dimensional electronic spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Bizimana, Laurie A.; Brazard, Johanna; Carbery, William P.; Gellen, Tobias; Turner, Daniel B., E-mail: dturner@nyu.edu [Department of Chemistry, New York University, 100 Washington Square East, New York, New York 10003 (United States)

    2015-10-28

    Coherent multidimensional optical spectroscopy is an emerging technique for resolving structure and ultrafast dynamics of molecules, proteins, semiconductors, and other materials. A current challenge is the quality of kinetics that are examined as a function of waiting time. Inspired by noise-suppression methods of transient absorption, here we incorporate shot-by-shot acquisitions and balanced detection into coherent multidimensional optical spectroscopy. We demonstrate that implementing noise-suppression methods in two-dimensional electronic spectroscopy not only improves the quality of features in individual spectra but also increases the sensitivity to ultrafast time-dependent changes in the spectral features. Measurements on cresyl violet perchlorate are consistent with the vibronic pattern predicted by theoretical models of a highly displaced harmonic oscillator. The noise-suppression methods should benefit research into coherent electronic dynamics, and they can be adapted to multidimensional spectroscopies across the infrared and ultraviolet frequency ranges.

   3. Two dimensional code for modeling of high ion cyclotron harmonic fast wave heating and current drive

    International Nuclear Information System (INIS)

    Grekov, D.; Kasilov, S.; Kernbichler, W.

    2016-01-01

    A two dimensional numerical code for computation of the electromagnetic field of a fast magnetosonic wave in a tokamak at high harmonics of the ion cyclotron frequency has been developed. The code computes the finite difference solution of Maxwell equations for separate toroidal harmonics making use of the toroidal symmetry of tokamak plasmas. The proper boundary conditions are prescribed at the realistic tokamak vessel. The currents in the RF antenna are specified externally and then used in Ampere law. The main poloidal tokamak magnetic field and the "kinetic" part of the dielectric permeability tensor are treated iteratively. The code has been verified against known analytical solutions and first calculations of current drive in the spherical torus are presented.

  4. Three dimensional imaging of damage in structural materials using high resolution micro-tomography

    Energy Technology Data Exchange (ETDEWEB)

    Buffiere, J.-Y. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: jean-yves.buffiere@insa-lyon.fr; Proudhon, H. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Ferrie, E. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Ludwig, W. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Maire, E. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Cloetens, P. [ESRF Grenoble (France)

    2005-08-15

    This paper presents recent results showing the ability of high resolution synchrotron X-ray micro-tomography to image damage initiation and development during mechanical loading of structural metallic materials. First, the initiation, growth and coalescence of porosities in the bulk of two metal matrix composites have been imaged at different stages of a tensile test. Quantitative data on damage development has been obtained and related to the nature of the composite matrix. Second, three dimensional images of fatigue crack have been obtained in situ for two different Al alloys submitted to fretting and/or uniaxial in situ fatigue. The analysis of those images shows the strong interaction of the cracks with the local microstructure and provides unique experimental data for modelling the behaviour of such short cracks.

  5. Three dimensional imaging of damage in structural materials using high resolution micro-tomography

    International Nuclear Information System (INIS)

    Buffiere, J.-Y.; Proudhon, H.; Ferrie, E.; Ludwig, W.; Maire, E.; Cloetens, P.

    2005-01-01

    This paper presents recent results showing the ability of high resolution synchrotron X-ray micro-tomography to image damage initiation and development during mechanical loading of structural metallic materials. First, the initiation, growth and coalescence of porosities in the bulk of two metal matrix composites have been imaged at different stages of a tensile test. Quantitative data on damage development has been obtained and related to the nature of the composite matrix. Second, three dimensional images of fatigue crack have been obtained in situ for two different Al alloys submitted to fretting and/or uniaxial in situ fatigue. The analysis of those images shows the strong interaction of the cracks with the local microstructure and provides unique experimental data for modelling the behaviour of such short cracks

  6. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    Energy Technology Data Exchange (ETDEWEB)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)

    2010-02-14

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.

  7. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    International Nuclear Information System (INIS)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail; Bougouffa, Smail

    2010-01-01

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.

  8. Time–energy high-dimensional one-side device-independent quantum key distribution

    International Nuclear Information System (INIS)

    Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei

    2017-01-01

    Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) imposes fewer requirements, which are much easier to meet. In this paper, by applying recently developed novel time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) and provide the security proof against coherent attacks. Besides, we connect the security with quantum steering. By numerical simulation, we obtain the secret key rate for Alice’s different detection efficiencies. The results show that our protocol can perform much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice’s detection efficiency, and the dispersion coefficient. Finally, we simply analyze its performance in the optical fiber channel. (paper)

  9. Propagation of Elastic Waves in a One-Dimensional High Aspect Ratio Nanoridge Phononic Crystal

    Directory of Open Access Journals (Sweden)

    Abdellatif Gueddida

    2018-05-01

    We investigate the propagation of elastic waves in a one-dimensional (1D) phononic crystal constituted by high aspect ratio epoxy nanoridges that have been deposited at the surface of a glass substrate. With the help of the finite element method (FEM), we calculate the dispersion curves of the modes localized at the surface for propagation both parallel and perpendicular to the nanoridges. When the direction of the wave is parallel to the nanoridges, we find that the vibrational states coincide with the Lamb modes of an infinite plate that correspond to one nanoridge. When the direction of wave propagation is perpendicular to the 1D nanoridges, the localized modes inside the nanoridges give rise to flat branches in the band structure that interact with the surface Rayleigh mode, and possibly open narrow band gaps. Filling the nanoridge structure with a viscous liquid produces new modes that propagate along the 1D finite height multilayer array.

  10. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression.
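    For ordinary (linear) PCA the leave-one-out renormalization idea can be sketched directly: compare the spread of training-set scores with the spread of leave-one-out projections and rescale each component by their ratio, which falls below one when variance is inflated. The paper's kernel (kPCA) version and its fast approximate estimator are not reproduced here.

```python
import numpy as np

def loo_scale_factors(X, n_components=2):
    """Per-component ratio of leave-one-out to training score spread (< 1 means inflation)."""
    n = X.shape[0]
    loo_scores = np.zeros((n, n_components))
    for i in range(n):                                 # project each left-out sample
        X_tr = np.delete(X, i, axis=0)
        mu = X_tr.mean(axis=0)
        _, _, Vt = np.linalg.svd(X_tr - mu, full_matrices=False)
        loo_scores[i] = (X[i] - mu) @ Vt[:n_components].T
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    train_scores = (X - mu) @ Vt[:n_components].T
    return loo_scores.std(axis=0) / train_scores.std(axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 500))                     # small sample, high dimension
print("renormalization factors:", loo_scale_factors(X).round(3))
```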

  11. Advances in high-resolution imaging--techniques for three-dimensional imaging of cellular structures.

    Science.gov (United States)

    Lidke, Diane S; Lidke, Keith A

    2012-06-01

    A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques.

  12. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference of the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
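    In the spirit of the randomization approach described above (though not the authors' exact procedure), the sketch below selects features with the Lasso and then judges each selected coefficient against a null distribution obtained by refitting the Lasso on permuted responses; the data and regularization strength are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 80, 300
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)   # two genuine signals

lasso = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

n_perm = 200
null_max = np.empty(n_perm)
for b in range(n_perm):                                # null: response unrelated to X
    null_max[b] = np.max(np.abs(Lasso(alpha=0.2).fit(X, rng.permutation(y)).coef_))

for j in selected:
    obs = abs(lasso.coef_[j])
    p_val = (1 + np.sum(null_max >= obs)) / (1 + n_perm)
    print(f"feature {j}: |coef| = {obs:.3f}, permutation p-value = {p_val:.3f}")
```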

  13. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera

    Directory of Open Access Journals (Sweden)

    Gregory A. Howland

    2013-02-01

    Full Text Available We implement a double-pixel compressive-sensing camera to efficiently characterize, at high resolution, the spatially entangled fields that are produced by spontaneous parametric down-conversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor up to n^2/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the entangled photons' classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.
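
    For intuition on why sparsity lets far fewer than n measurements suffice, a single-pixel-style sketch with random binary masks and an l1 (Lasso) reconstruction is shown below. The double-pixel correlation measurement of the paper is not modelled, and the scene, masks, and penalty are arbitrary assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n = 1024                                   # "pixels" per detector, as in the paper
        k = 10                                     # sparsity of the scene
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)

        m = 200                                    # measurements << n (raster scan needs n)
        A = rng.choice([0.0, 1.0], size=(m, n))    # random binary masks
        y = A @ x

        x_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(A, y).coef_
        print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))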

  14. LABAN-PEL: a two-dimensional, multigroup diffusion, high-order response matrix code

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-06-01

    The capabilities of LABAN-PEL are described. LABAN-PEL is a modified version of the two-dimensional, high-order response matrix code LABAN, written by Lindahl. The new version extends the capabilities of the original code with regard to the treatment of neutron migration by including an option to utilize full group-to-group diffusion coefficient matrices. In addition, the code has been converted from single to double precision and the necessary routines added to activate its multigroup capability. The coding has also been converted to standard FORTRAN-77 to enhance the portability of the code. Details regarding the input data requirements and calculational options of LABAN-PEL are provided. 13 refs

  15. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    Science.gov (United States)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method, based on the operational matrices of three-dimensional orthogonal Jacobi polynomials, to numerically solve a generalized class of high-dimensional, multi-term fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the considered fractional-order problem into an easily solvable system of algebraic equations whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is verified by comparing the results of our MATLAB simulations with the exact solutions available in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
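
    The operational-matrix idea itself can be illustrated in one dimension with an integer-order toy problem: differentiation in an orthogonal-polynomial basis becomes a matrix acting on expansion coefficients, so the differential equation collapses to an algebraic system. The sketch below uses a Legendre basis and a first-order ODE purely for illustration; the paper's three-dimensional Jacobi construction and fractional derivatives are not reproduced.

        import numpy as np
        from numpy.polynomial import legendre as L

        N = 8                                     # basis size P_0 .. P_{N-1}
        D = np.zeros((N, N))
        for j in range(N):
            e = np.zeros(N); e[j] = 1.0
            dcoef = L.legder(e)                   # derivative of P_j in the same basis
            D[:len(dcoef), j] = dcoef             # column j of the operational matrix

        # Solve u'(x) + u(x) = 0 on [-1, 1] with u(-1) = 1 (exact: exp(-(x+1))).
        A = D + np.eye(N)
        # Replace the last row with the boundary condition sum_j c_j P_j(-1) = 1.
        A[-1, :] = L.legvander(np.array([-1.0]), N - 1)[0]
        b = np.zeros(N); b[-1] = 1.0
        c = np.linalg.solve(A, b)                 # algebraic system for the coefficients

        x = np.linspace(-1, 1, 5)
        print(np.c_[L.legval(x, c), np.exp(-(x + 1))])   # numerical vs exact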

  16. Three-dimensional Core Design of a Super Fast Reactor with a High Power Density

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Oka, Yoshiaki; Ishiwatari, Yuki; Ikejiri, Satoshi; Ju, Haitao

    2010-01-01

    The SuperCritical Water-cooled Reactor (SCWR) pursues a high power density to reduce its capital cost. The fast spectrum SCWR, called a super fast reactor, can be designed with a higher power density than the thermal spectrum SCWR. The mechanism of increasing the average power density of the super fast reactor is studied theoretically and numerically. Some key parameters affecting the average power density, including the fuel pin outer diameter, fuel pitch, power peaking factor, and the fraction of seed assemblies, are analyzed and optimized to achieve a more compact core. Based on those sensitivity analyses, a compact super fast reactor is successfully designed with an average power density of 294.8 W/cm³. The core characteristics are analyzed by using a three-dimensional neutronics/thermal-hydraulics coupling method. Numerical results show that all of the design criteria and goals are satisfied.

  17. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
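
    The structural point - that the one-sample statistic is a sum of log-transformed squared t-statistics across coordinates - can be seen in a small numerical sketch. The calibration below is a crude Monte Carlo reference rather than the asymptotic normality result derived in the paper, and the dimensions and mean shift are invented.

        import numpy as np

        def diag_lrt_stat(X):
            """-2 log likelihood ratio for H0: mu = 0 under a diagonal covariance."""
            n, p = X.shape
            t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)   # per-dimension t
            return n * np.sum(np.log1p(t**2 / (n - 1)))

        rng = np.random.default_rng(3)
        n, p = 30, 500
        X = rng.normal(size=(n, p)) + 0.15        # small shift in every coordinate
        obs = diag_lrt_stat(X)

        # Monte Carlo null reference under independent standard normals.
        null = np.array([diag_lrt_stat(rng.normal(size=(n, p))) for _ in range(500)])
        print("statistic:", round(obs, 1), " Monte Carlo p-value:",
              float((1 + np.sum(null >= obs)) / (1 + len(null))))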

  18. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    Science.gov (United States)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, with particular attention to clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
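
    A generic flavour of such a time-prediction model (not the paper's parameterization) is a compute term that shrinks with the core count plus a communication term that does not, from which parallel efficiency follows. All constants below are hypothetical.

        # Hypothetical parameters: t_comp is the single-core compute time, t_lat a
        # per-step communication latency, t_msg a per-core message cost.
        def predicted_time(P, t_comp=8192.0, t_lat=0.5, t_msg=0.002):
            return t_comp / P + t_lat + t_msg * P**0.5   # assumed scaling of comm cost

        for P in (64, 512, 4096, 8192):
            T1, TP = predicted_time(1), predicted_time(P)
            print(f"P = {P:5d}   time = {TP:8.3f}   efficiency = {T1 / (P * TP):.2f}")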

  19. Highly mobile charge-transfer excitons in two-dimensional WS2/tetracene heterostructures

    Science.gov (United States)

    Zhu, Tong; Yuan, Long; Zhao, Yan; Zhou, Mingwei; Wan, Yan; Mei, Jianguo; Huang, Libai

    2018-01-01

    Charge-transfer (CT) excitons at heterointerfaces play a critical role in light to electricity conversion using organic and nanostructured materials. However, how CT excitons migrate at these interfaces is poorly understood. We investigate the formation and transport of CT excitons in two-dimensional WS2/tetracene van der Waals heterostructures. Electron and hole transfer occurs on the time scale of a few picoseconds, and emission of interlayer CT excitons with a binding energy of ~0.3 eV has been observed. Transport of the CT excitons is directly measured by transient absorption microscopy, revealing coexistence of delocalized and localized states. Trapping-detrapping dynamics between the delocalized and localized states leads to stretched-exponential photoluminescence decay with an average lifetime of ~2 ns. The delocalized CT excitons are remarkably mobile with a diffusion constant of ~1 cm² s⁻¹. These highly mobile CT excitons could have important implications in achieving efficient charge separation. PMID:29340303
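
    Two of the quoted quantities lend themselves to a small analysis sketch: a stretched-exponential fit to a photoluminescence decay, with average lifetime ⟨τ⟩ = (τ/β)Γ(1/β), and a diffusion constant from a linearly growing mean squared displacement. The synthetic data and parameter values below are invented for illustration only.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import gamma

        rng = np.random.default_rng(5)

        # Synthetic PL decay following a stretched exponential (tau = 1.2 ns, beta = 0.7).
        t = np.linspace(0.01, 10.0, 200)                        # ns
        pl = np.exp(-(t / 1.2)**0.7) + rng.normal(0, 0.01, t.size)

        stretched = lambda t, tau, beta: np.exp(-(t / tau)**beta)
        (tau, beta), _ = curve_fit(stretched, t, pl, p0=(1.0, 0.8))
        print("average lifetime <tau> =", tau / beta * gamma(1.0 / beta), "ns")

        # Diffusion constant from MSD(t) = 2 * D * t in one dimension
        # (synthetic MSD data corresponding to D = 0.1 um^2/ns).
        t_ns = np.array([0.1, 0.2, 0.4, 0.8])                   # ns
        msd = 2 * 0.1 * t_ns                                    # um^2
        D = np.polyfit(t_ns, msd, 1)[0] / 2.0
        print("D =", D, "um^2/ns  (~", D * 10, "cm^2/s)")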

  20. A Monte Carlo software for the 1-dimensional simulation of IBIC experiments

    Energy Technology Data Exchange (ETDEWEB)

    Forneris, J., E-mail: jacopo.forneris@unito.it [Physics Department, NIS Centre and CNISM, University of Torino, INFN-sez. Torino, Via P. Giuria 1, 10125 Torino (Italy); Jakšić, M. [Ruđer Bošković Institute, Bijenička cesta 54, P.O. Box 180, 10002 Zagreb (Croatia); Pastuović, Ž. [Australian Nuclear Science and Technology Organization, Locked Bag 2001, Kirrawee DC, NSW 2234 (Australia); Vittone, E. [Physics Department, NIS Centre and CNISM, University of Torino, INFN-sez. Torino, Via P. Giuria 1, 10125 Torino (Italy)

    2014-08-01

    Ion beam induced charge (IBIC) microscopy is a valuable tool for the analysis of the electronic properties of semiconductors. In this work, a recently developed Monte Carlo approach for the simulation of IBIC experiments is presented along with a self-standing software package equipped with a graphical user interface. The method is based on the probabilistic interpretation of the excess charge carrier continuity equations, and it offers the end user full control not only of the physical properties ruling the induced charge formation mechanism (i.e., mobility, lifetime, electrostatics, device geometry), but also of the relevant experimental conditions (ionization profiles, beam dispersion, electronic noise) affecting the measurement of the IBIC pulses. Moreover, the software implements a novel model for the quantitative evaluation of radiation damage effects on the charge collection efficiency degradation of ion-beam-irradiated devices. The reliability of the model implementation is then validated against a benchmark IBIC experiment.
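
    A heavily simplified 1D sketch of the underlying idea - carriers performing a drift-diffusion random walk with a finite lifetime while the induced charge is accumulated through the Shockley-Ramo weighting potential - is given below. All device parameters are assumed, only electrons are followed, and this is not the published software.

        import numpy as np

        rng = np.random.default_rng(4)
        d    = 10e-4            # detector thickness [cm]
        V    = 10.0             # bias voltage [V]
        mu   = 1000.0           # electron mobility [cm^2/(V s)]
        tau  = 1e-9             # carrier lifetime [s]
        D    = mu * 0.02585     # Einstein relation at 300 K [cm^2/s]
        dt   = 1e-12            # time step [s]
        E    = V / d            # uniform electric field [V/cm]
        w    = lambda x: x / d  # weighting potential of a planar electrode

        def cce_contribution(x0, n_carriers=500):
            """Mean induced charge (in units of e) for electrons generated at depth x0."""
            q = 0.0
            for _ in range(n_carriers):
                x = x0
                while 0.0 < x < d:
                    if rng.random() < dt / tau:                  # trapping/recombination
                        break
                    x_new = x + mu * E * dt + rng.normal(0.0, np.sqrt(2 * D * dt))
                    q += w(min(max(x_new, 0.0), d)) - w(x)       # Shockley-Ramo increment
                    x = x_new
            return q / n_carriers

        for x0 in (1e-4, 5e-4, 9e-4):                            # generation depths [cm]
            print(f"x0 = {x0 * 1e4:.0f} um  ->  electron CCE contribution ~ {cce_contribution(x0):.2f}")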