WorldWideScience

Sample records for high dimensional experiment

  1. Three-Dimensional Triplet Tracking for LHC and Future High Rate Experiments

    CERN Document Server

    Schöning, Andre

    2014-10-20

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space poi...

  2. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
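
The X gate described above generalizes the qubit bit-flip to a cyclic shift on d basis states, X|j⟩ = |(j+1) mod d⟩. As a minimal numerical sketch (a permutation-matrix model only, not the authors' optical implementation):

```python
# Model the d-dimensional generalized Pauli X gate as a permutation matrix:
# X|j> = |(j+1) mod d>, so entry X[i][j] = 1 exactly when i == (j+1) mod d.

def x_gate(d):
    """Return the d-dimensional cyclic-shift (generalized Pauli X) matrix."""
    return [[1 if r == (c + 1) % d else 0 for c in range(d)] for r in range(d)]

def identity(d):
    return [[1 if i == j else 0 for j in range(d)] for i in range(d)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

d = 4
x = x_gate(d)
power = identity(d)
for _ in range(d):          # applying X four times returns to the identity
    power = matmul(power, x)
assert power == identity(d)
```

The integer powers X, X², X³ mentioned in the abstract are simply repeated applications of this shift; together with the diagonal Z gate they generate the full d-dimensional Pauli group.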

  3. Continuation of full-scale three-dimensional numerical experiments on high-intensity particle and laser beam-matter interactions

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Warren, B.

    2012-12-01

We present results from the grant entitled, Continuation of full-scale three-dimensional numerical experiments on high-intensity particle and laser beam-matter interactions. The research significantly advanced the understanding of basic high-energy density science (HEDS) of ultra-intense laser and particle beam plasma interactions. This advancement in understanding was then used to aid the quest to build 1 GeV to 500 GeV plasma-based accelerator stages. The work blended basic research with three-dimensional, fully nonlinear, and fully kinetic simulations, including full-scale modeling of ongoing or planned experiments. The primary tool was three-dimensional particle-in-cell (PIC) simulations. The simulations provided a test bed for theoretical ideas and models as well as a means to guide experiments. The research also included careful benchmarking of codes against experiment. High-fidelity full-scale modeling provided a means to extrapolate parameters into regimes not accessible to current or near-term experiments, thereby allowing concepts to be tested with confidence before tens to hundreds of millions of dollars were spent building facilities. The research allowed the development of a hierarchy of PIC codes and diagnostics that is among the most advanced in the world.

  4. Three-dimensional triplet tracking for LHC and future high rate experiments

    International Nuclear Information System (INIS)

    Schöning, A

    2014-01-01

The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. Full tracks can be reconstructed step-wise by connecting hit triplet combinations from different layers, thus heavily reducing the combinatorial problem and accelerating track linking. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points. With the advent of relatively cheap and industrially available CMOS-sensors the construction of highly granular full scale pixel tracking detectors seems to be possible also for experiments at LHC or future high energy (hadron) colliders. In this paper tracking performance studies for full-scale pixel detectors, including their optimisation for 3D-triplet tracking, are presented. The results obtained for different types of tracker geometries and different reconstruction methods are compared. The potential of reducing the number of tracking layers and - along with that - the material budget using this new tracking concept is discussed. The possibility of using 3D-triplet tracking for triggering and fast online reconstruction is highlighted.
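
The solenoidal-field triplet fit rests on a simple geometric fact: in the transverse plane a charged track is a circle, so three hits already determine its curvature and hence the transverse momentum, via the standard relation p_T [GeV/c] ≈ 0.3 · B [T] · R [m]. A minimal sketch of that core step (this is not the paper's over-constrained 3D fit, and the 2 T field value is an illustrative assumption):

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) hit positions."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return a * b * c / (4.0 * area)

# Three hits lying on a circle of radius 1 m around the origin:
hits = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
R = circumradius(*hits)          # -> 1.0 m
pT = 0.3 * 2.0 * R               # assumed 2 T solenoid -> p_T ~ 0.6 GeV/c
```

Because the triplet over-constrains the circle, residuals from this fit can be used to reject fake hit combinations, which is the robustness the abstract refers to.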

  5. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. There are, however, unique challenges in mining data of high dimensionality, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.
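
The "meaningfulness of the similarity measure" problem can be made concrete: as dimensionality grows, the nearest and farthest neighbours of a query point become almost equidistant, so the relative contrast (d_max − d_min)/d_min collapses. A small self-contained illustration (uniform random data; all sizes are arbitrary choices):

```python
import math
import random

def relative_contrast(dim, n_points=200, seed=0):
    """Relative distance contrast of random points around a random query."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(math.dist(query, p))
    return (max(dists) - min(dists)) / min(dists)

low = relative_contrast(2)     # contrast is large in 2 dimensions
high = relative_contrast(500)  # and collapses in 500 dimensions
assert high < low
```

This is the effect that motivates the subspace and pattern-based techniques surveyed in the chapter: distances computed over all attributes stop discriminating, while distances over well-chosen attribute subsets still can.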

  6. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
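
hdm itself is an R package, but the post-double-selection idea behind this kind of inference is language-neutral: lasso-select controls that predict the outcome, lasso-select controls that predict the target variable, then run OLS of the outcome on the target plus the union. A hedged numpy sketch (all data, names, and tuning values are invented for illustration; the plain ISTA solver stands in for a production lasso):

```python
import numpy as np

def lasso_ista(X, y, lam, steps=500):
    """Plain ISTA solver for 0.5*||y - Xb||^2 / n + lam*||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n
        b = b - grad / L
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft-threshold
    return b

rng = np.random.default_rng(0)
n, p = 200, 50
Z = rng.standard_normal((n, p))                # high-dimensional controls
d = Z[:, 0] + 0.1 * rng.standard_normal(n)     # target variable depends on Z[:, 0]
y = 0.5 * d + Z[:, 1] + 0.1 * rng.standard_normal(n)

# Step 1: lasso of y on Z, and of d on Z; keep the union of selected controls.
sel = (np.abs(lasso_ista(Z, y, 0.1)) > 1e-6) | (np.abs(lasso_ista(Z, d, 0.1)) > 1e-6)
# Step 2: ordinary least squares of y on the target plus the selected controls.
Xols = np.column_stack([d, Z[:, sel]])
coef = np.linalg.lstsq(Xols, y, rcond=None)[0]
theta = coef[0]                                # estimate of the effect of d (true 0.5)
```

Selecting on both equations is what makes the final OLS coefficient robust to moderate selection mistakes, which is the key to the uniformly valid confidence intervals the abstract mentions.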

  7. Three-dimensional simulations of low foot and high foot implosion experiments on the National Ignition Facility

    International Nuclear Information System (INIS)

    Clark, D. S.; Weber, C. R.; Milovich, J. L.; Salmonson, J. D.; Kritcher, A. L.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Hurricane, O. A.; Jones, O. S.; Marinak, M. M.; Patel, P. K.; Robey, H. F.; Sepke, S. M.; Edwards, M. J.

    2016-01-01

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) [E. I. Moses et al., Phys. Plasmas 16, 041006 (2009)] require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3D) character of the flow, accurately modeling NIF implosions remains at the edge of current simulation capabilities. This paper describes the current state of progress of 3D capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. For both implosion types, the simulations show reasonable, though not perfect, agreement with the data and suggest that a reliable predictive capability is developing to guide future implosions toward ignition.

  8. Three-dimensional simulations of low foot and high foot implosion experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Clark, D. S.; Weber, C. R.; Milovich, J. L.; Salmonson, J. D.; Kritcher, A. L.; Haan, S. W.; Hammel, B. A.; Hinkel, D. E.; Hurricane, O. A.; Jones, O. S.; Marinak, M. M.; Patel, P. K.; Robey, H. F.; Sepke, S. M.; Edwards, M. J. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94550 (United States)

    2016-05-15

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) [E. I. Moses et al., Phys. Plasmas 16, 041006 (2009)] require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3D) character of the flow, accurately modeling NIF implosions remains at the edge of current simulation capabilities. This paper describes the current state of progress of 3D capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. For both implosion types, the simulations show reasonable, though not perfect, agreement with the data and suggest that a reliable predictive capability is developing to guide future implosions toward ignition.

  9. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known… to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches… for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster…

  10. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  11. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional… challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant… for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines…

  12. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks.
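
The gain from a d-level alphabet is easy to quantify: a noiseless d-dimensional channel carries log2(d) bits per photon, and symbol errors eat into that budget. A back-of-the-envelope sketch (the symmetric-error formula below is a textbook assumption, not taken from this paper, and the 10% error rate is illustrative):

```python
import math

# For a d-level alphabet with symbol error rate e spread uniformly over the
# d-1 wrong outcomes, the mutual information per sifted symbol is
#   I(d, e) = log2(d) + (1-e)*log2(1-e) + e*log2(e / (d-1)).

def bits_per_photon(d, e):
    if e == 0.0:
        return math.log2(d)
    return math.log2(d) + (1 - e) * math.log2(1 - e) + e * math.log2(e / (d - 1))

ideal = bits_per_photon(7, 0.0)    # log2(7) ~ 2.81 bits for a 7-dim alphabet
noisy = bits_per_photon(7, 0.10)   # errors reduce the usable capacity
assert noisy < ideal
```

Even with a 10% symbol error rate a 7-dimensional alphabet stays above 2 bits per sifted photon, which is why spatial-mode encoding can beat the 1-bit-per-photon ceiling of polarization qubits.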

  13. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  14. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  15. Comparison of electron cloud simulation and experiments in the high-current experiment

    International Nuclear Information System (INIS)

    Cohen, R.H.; Friedman, A.; Covo, M. Kireeff; Lund, S.M.; Molvik, A.W.; Bieniosek, F.M.; Seidl, P.A.; Vay, J.-L.; Verboncoeur, J.; Stoltz, P.; Veitzer, S.

    2004-01-01

A set of experiments has been performed on the High-Current Experiment (HCX) facility at LBNL, in which the ion beam is allowed to collide with an end plate and thereby induce a copious supply of desorbed electrons. Through the use of combinations of biased and grounded electrodes positioned in between and downstream of the quadrupole magnets, the flow of electrons upstream into the magnets can be turned on or off. Properties of the resultant ion beam are measured under each condition. The experiment is modeled via a full three-dimensional, two-species (electron and ion) particle simulation, as well as via reduced simulations (ions with appropriately chosen model electron cloud distributions, and a high-resolution simulation of the region adjacent to the end plate). The three-dimensional simulations are the first of their kind and the first to make use of a timestep-acceleration scheme that allows the electrons to be advanced with a timestep that is not small compared to the highest electron cyclotron period. The simulations reproduce qualitative aspects of the experiments, illustrate some unanticipated physical effects, and serve as an important demonstration of a developing simulation capability.

  16. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth, and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in an OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality can be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space.

  17. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
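
The underlying shell-fragment idea can be sketched in a few lines: rather than materializing all 2^d cuboids of a d-dimensional cube, precompute group-bys only over small attribute subsets ("shells"), which are later assembled to answer queries. A toy illustration (not the paper's algorithm; the schema and data are invented):

```python
from itertools import combinations
from collections import defaultdict

def shell_fragments(rows, dims, measure, max_shell=2):
    """Aggregate the measure over every attribute subset of size <= max_shell."""
    cube = {}
    for size in range(1, max_shell + 1):
        for dim_subset in combinations(dims, size):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[d] for d in dim_subset)
                agg[key] += row[measure]
            cube[dim_subset] = dict(agg)
    return cube

rows = [
    {"city": "NY", "year": 2020, "item": "pen", "sales": 3},
    {"city": "NY", "year": 2021, "item": "ink", "sales": 5},
    {"city": "LA", "year": 2020, "item": "pen", "sales": 2},
]
cube = shell_fragments(rows, ["city", "year", "item"], "sales")
# e.g. cube[("city",)] == {("NY",): 8, ("LA",): 2}
```

With d attributes and shell size s, only sum of C(d, k) for k ≤ s fragments are stored instead of 2^d cuboids, which is what makes shell approaches viable for high-dimensional warehouses.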

  18. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
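
One classical idea from this literature is easy to demonstrate: when p is large relative to n, soft-threshold the off-diagonal entries of the sample covariance matrix to obtain a sparse, better-conditioned estimate. A minimal numpy sketch (an illustration of the generic technique, not a procedure from the book; the threshold value is an arbitrary choice, not a recommended rule):

```python
import numpy as np

def soft_threshold_cov(X, tau):
    """Sparse covariance estimate: shrink off-diagonal sample covariances."""
    S = np.cov(X, rowvar=False)
    S_hat = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(S_hat, np.diag(S))        # never threshold the variances
    return S_hat

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 20))              # truth: identity covariance
S_hat = soft_threshold_cov(X, tau=0.4)
# Most spurious off-diagonal noise is zeroed while the variances survive.
```

In practice the threshold is tuned (e.g. by cross-validation) to balance killing noise against erasing genuinely correlated pairs.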

  19. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
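
The nesting of 1D solvers into a 4D integral can be sketched with a tensor-product midpoint rule (a stand-in for the GSL-based solvers; the SNS integrand itself is not reproduced here, so a separable test function with a known answer is used instead):

```python
def midpoint_4d(f, n=20):
    """Integrate f over the unit 4-cube with an n^4-point midpoint rule."""
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]
    total = 0.0
    for x in pts:                    # four nested 1D rules = one 4D rule
        for y in pts:
            for z in pts:
                for w in pts:
                    total += f(x, y, z, w)
    return total * h ** 4

# Test integrand with a known answer: integral of x*y*z*w over [0,1]^4 = 1/16.
approx = midpoint_4d(lambda x, y, z, w: x * y * z * w)
assert abs(approx - 1.0 / 16.0) < 1e-9
```

The cost scales as n^4 evaluations, which is exactly why the report considers quasi-Monte Carlo alternatives and parallelization: tensor-product rules become prohibitive as either n or the dimension grows.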

  20. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  1. High-resolution nuclear magnetic resonance measurements in inhomogeneous magnetic fields: A fast two-dimensional J-resolved experiment

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yuqing; Cai, Shuhui; Yang, Yu; Sun, Huijun; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn; Chen, Zhong, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn [Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, State Key Laboratory for Physical Chemistry of Solid Surfaces, Xiamen University, Xiamen 361005 (China); Lin, Yung-Ya [Department of Chemistry and Biochemistry, University of California, Los Angeles, California 90095 (United States)

    2016-03-14

High spectral resolution in nuclear magnetic resonance (NMR) is a prerequisite for achieving accurate information relevant to molecular structures and composition assignments. The continuous development of superconducting magnets guarantees strong and homogeneous static magnetic fields for satisfactory spectral resolution. However, there exist circumstances, such as measurements on biological tissues and heterogeneous chemical samples, where the field homogeneity is degraded and spectral line broadening seems inevitable. Here we propose an NMR method, named intermolecular zero-quantum coherence J-resolved spectroscopy (iZQC-JRES), to face the challenge of field inhomogeneity and obtain the desired high-resolution two-dimensional J-resolved spectra with fast acquisition. Theoretical analyses for this method are given according to the intermolecular multiple-quantum coherence treatment. Experiments on (a) a simple chemical solution and (b) an aqueous solution of mixed metabolites under externally deshimmed fields, and on (c) a table grape sample with intrinsic field inhomogeneity from magnetic susceptibility variations demonstrate the feasibility and applicability of the iZQC-JRES method. The application of this method to inhomogeneous chemical and biological samples, perhaps even in vivo, appears promising.

  2. Reduced dimensionality (3,2)D NMR experiments and their automated analysis: implications to high-throughput structural studies on proteins.

    Science.gov (United States)

    Reddy, Jithender G; Kumar, Dinesh; Hosur, Ramakrishna V

    2015-02-01

Protein NMR spectroscopy has expanded dramatically over the last decade into a powerful tool for the study of their structure, dynamics, and interactions. The primary requirement for all such investigations is sequence-specific resonance assignment. The demand now is to obtain this information as rapidly as possible and in all types of protein systems, stable/unstable, soluble/insoluble, small/big, structured/unstructured, and so on. In this context, we introduce here two reduced dimensionality experiments – (3,2)D-hNCOcanH and (3,2)D-hNcoCAnH – which enhance the previously described 2D NMR-based assignment methods quite significantly. Both the experiments can be recorded in just about 2-3 h each and hence would be of immense value for high-throughput structural proteomics and drug discovery research. The applicability of the method has been demonstrated using alpha-helical bovine apo calbindin-D9k P43M mutant (75 aa) protein. Automated assignment of this data using AUTOBA has been presented, which enhances the utility of these experiments. The backbone resonance assignments so derived are utilized to estimate secondary structures and the backbone fold using Web-based algorithms. Taken together, we believe that the method and the protocol proposed here can be used for routine high-throughput structural studies of proteins.

  3. Dimensional analysis of small-scale steam explosion experiments

    International Nuclear Information System (INIS)

    Huh, K.; Corradini, M.L.

    1986-01-01

Dimensional analysis is applied to Nelson's small-scale steam explosion experiments to determine the qualitative effect of each relevant parameter for triggering a steam explosion. According to the experimental results, the liquid entrapment model seems to be a consistent explanation for the steam explosion triggering mechanism. The three-dimensional oscillatory wave motion of the vapor/liquid interface is analyzed to determine the necessary conditions for local condensation and production of a coolant microjet to be entrapped in fuel. It is proposed that different contact modes between fuel and coolant may involve different initiation mechanisms of steam explosions.

  4. GOTCHA experience report: three-dimensional SAR imaging with complete circular apertures

    Science.gov (United States)

    Ertin, Emre; Austin, Christian D.; Sharma, Samir; Moses, Randolph L.; Potter, Lee C.

    2007-04-01

We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information about the anisotropic reflectivity of the scattering centers in the scene, and also provides three-dimensional information about the location of the scattering centers due to a nonplanar collection geometry. Three-dimensional imaging results with single-pass circular SAR data reveal that the 3D resolution of the system is poor due to the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR data and illustrate reasoning about three-dimensional shape from multi-view layover using prior information about target scattering mechanisms. Next, we discuss processing of multipass CSAR data and present volumetric imaging results with IFSAR and three-dimensional backprojection techniques on the GOTCHA data set. We observe that the volumetric imaging with GOTCHA data is degraded by aliasing and high sidelobes due to nonlinear flight paths and sparse, unequal sampling in elevation. We conclude with a model-based technique that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using the estimated model.

  5. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
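
    The filtered derivative idea mentioned above is easy to state for scalar sequences: slide two adjacent windows along the signal and flag locations where their means differ by more than a threshold. A minimal sketch (not the authors' high-dimensional method; the window size and threshold are illustrative choices):

```python
# Filtered-derivative change-point detection for a scalar sequence (sketch).
def filtered_derivative(x, window, threshold):
    """Flag indices where adjacent window means differ by more than threshold."""
    flags = []
    for t in range(window, len(x) - window + 1):
        left = sum(x[t - window:t]) / window
        right = sum(x[t:t + window]) / window
        if abs(right - left) > threshold:
            flags.append(t)
    return flags

# A noiseless sequence with a jump at index 50:
signal = [0.0] * 50 + [5.0] * 50
change_points = filtered_derivative(signal, window=10, threshold=2.5)
```

    On noisy data the flags form clusters around each change point, so a post-processing step (e.g. taking the local maximum of the window difference) is typically used to pick a single estimate.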

  6. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  7. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  8. Numerical analysis of biological clogging in two-dimensional sand box experiments

    DEFF Research Database (Denmark)

    Kildsgaard, J.; Engesgaard, Peter Knudegaard

    2001-01-01

    Two-dimensional models for biological clogging and sorptive tracer transport were used to study the progress of clogging in a sand box experiment. The sand box had been inoculated with a strip of bacteria and exposed to a continuous injection of nitrate and acetate. Brilliant Blue was regularly...... injected during the clogging experiment and digital images of the tracer movement had been converted to concentration maps using an image analysis. The calibration of the models to the Brilliant Blue observations shows that Brilliant Blue has a solid biomass dependent sorption that is not compliant...... with the assumed linear constant Kd behaviour. It is demonstrated that the dimensionality of sand box experiments in comparison to column experiments results in a much lower reduction in hydraulic conductivity (factor of 100) and that the bulk hydraulic conductivity of the sand box decreased only slightly. However...

  9. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
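
    The simplest instance of rank-structured approximation is the matrix case: the best rank-1 approximation is built from the dominant singular pair, which power iteration finds. Hierarchical tensor formats generalize this idea to many dimensions; the sketch below covers only the two-dimensional special case:

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rank1_approx(A, iters=200):
    """Best rank-1 approximation sigma * u * v^T via power iteration on A^T A."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))           # one power-iteration step
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = matvec(A, v)
    sigma = math.sqrt(sum(x * x for x in Av))  # dominant singular value
    u = [x / sigma for x in Av]
    return [[sigma * ui * vj for vj in v] for ui in u]

A = [[2.0, 0.0], [0.0, 1.0]]  # singular values 2 and 1
B = rank1_approx(A)
```

    Keeping the first r singular pairs instead of one gives the best rank-r approximation; hierarchical tensor methods apply such truncations recursively along a dimension tree.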

  11. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
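
    The reported S-parameters can be checked against the ideal quantum prediction: for a two-qubit singlet with correlations E(a, b) = −cos(a − b), the standard CHSH angles reach Tsirelson's bound 2√2 ≈ 2.83, and any S > 2 is incompatible with local hidden-variable models. A short sketch:

```python
import math

def chsh_s(E, a, a2, b, b2):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def singlet_E(a, b):
    """Ideal two-qubit singlet correlation for analyzer angles a, b."""
    return -math.cos(a - b)

# Standard angles a=0, a'=pi/2, b=pi/4, b'=3*pi/4 reach Tsirelson's bound.
S = chsh_s(singlet_E, 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
```

    The experimental value of 2.76 quoted above sits between the classical bound 2 and this ideal 2√2, as imperfect visibility would predict.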

  12. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios have been widely used in different fields including the aerospace and defense industries, while the dimensional measurement of these micro parts has become a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures of spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg grating (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of these probes are analyzed respectively. In order to improve the performance of these probes, several approaches are proposed. A two-dimensional orthogonal path arrangement is proposed to enhance the dimensional measurement ability of MFL-collimation probes, while a high-resolution, high-response-speed interrogation method based on a differential method is used to improve the accuracy and dynamic characteristics of the FBG probes. Experiments on these special structural fiber probes are presented with a focus on their characteristics, and engineering applications are also presented to demonstrate their availability. In order to improve the accuracy and responsiveness of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was therefore verified through both analysis and experiments.

  13. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
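
    The scheme's core loop, in miniature: score feature bitmasks with a fitness function and let a stochastic optimizer explore the subset space. The sketch below substitutes simple random bit-flip hill climbing for the swarm metaheuristic, and a toy fitness for a real classifier's accuracy, so it only illustrates the search structure:

```python
import random

def fitness(mask, relevant):
    # Toy surrogate for classifier accuracy: reward covering the relevant
    # features, penalize subset size (accuracy vs. dimensionality trade-off).
    hits = sum(1 for f in relevant if mask[f])
    return hits - 0.1 * sum(mask)

def stochastic_feature_search(n_features, relevant, iters=2000, seed=0):
    # Random bit-flip hill climbing over feature bitmasks; a swarm
    # metaheuristic would explore the same space with many agents.
    rng = random.Random(seed)
    best = [0] * n_features
    best_fit = fitness(best, relevant)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_features)] ^= 1  # flip one feature in or out
        f = fitness(cand, relevant)
        if f > best_fit:
            best, best_fit = cand, f
    return sorted(i for i, b in enumerate(best) if b)

selected = stochastic_feature_search(30, relevant={3, 7, 19})
```

    In the real scheme, `fitness` would train and evaluate the chosen classifier on the candidate feature subset, and the bit-flip loop would be replaced by the chosen swarm algorithm.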

  14. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

    Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  15. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336
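
    For orientation, plain (query-independent) equi-depth quantization looks as follows; the QED variant described above additionally makes the bin boundaries depend on the query. The data and bin count here are arbitrary:

```python
def equi_depth_cuts(values, n_bins):
    """Cut points so each bin holds roughly the same number of values."""
    s = sorted(values)
    step = len(s) / n_bins
    return [s[int(i * step)] for i in range(1, n_bins)]

def quantize(value, cuts):
    """Bin index for a value under the given cut points."""
    for i, c in enumerate(cuts):
        if value < c:
            return i
    return len(cuts)

data = [1, 2, 2, 3, 10, 11, 12, 100]
cuts = equi_depth_cuts(data, n_bins=4)  # ~2 values per bin
```

    Unlike equi-width binning, equi-depth cuts adapt to skewed distributions (note how the outlier 100 does not stretch the other bins), which is what makes quantized bit-sliced indices usable on real feature columns.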

  16. Three-Dimensional Neutral Transport Simulations of Gas Puff Imaging Experiments

    International Nuclear Information System (INIS)

    Stotler, D.P.; D'Ippolito, D.A.; LeBlanc, B.; Maqueda, R.J.; Myra, J.R.; Sabbagh, S.A.; Zweben, S.J.

    2003-01-01

    Gas Puff Imaging (GPI) experiments are designed to isolate the structure of plasma turbulence in the plane perpendicular to the magnetic field. Three-dimensional aspects of this diagnostic technique as used on the National Spherical Torus eXperiment (NSTX) are examined via Monte Carlo neutral transport simulations. The radial width of the simulated GPI images are in rough agreement with observations. However, the simulated emission clouds are angled approximately 15 degrees with respect to the experimental images. The simulations indicate that the finite extent of the gas puff along the viewing direction does not significantly degrade the radial resolution of the diagnostic. These simulations also yield effective neutral density data that can be used in an approximate attempt to infer two-dimensional electron density and temperature profiles from the experimental images

  17. On the sensitivity of dimensional stability of high density polyethylene on heating rate

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Although high density polyethylene (HDPE) is one of the most widely used industrial polymers, its applications have been limited relative to its potential because of its low dimensional stability, particularly at high temperature. The dilatometry test is considered as a method for examining the thermal dimensional stability (TDS) of the material. In spite of the importance of simulating the TDS of HDPE during the dilatometry test, it has received little attention from other investigators. Thus the main goal of this research is the simulation of the TDS of HDPE, and validation of the simulation results against practical experiments. For this purpose the standard dilatometry test was done on HDPE specimens. The secant coefficient of linear thermal expansion was computed from the test. Then, by considering boundary conditions and material properties, the dilatometry test was simulated at different heating rates and the thermal strain versus temperature was calculated. The results showed that the simulation results and practical experiments were very close together.
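
    The secant coefficient referred to above relates the dilatometry length trace to thermal strain by eps(T) = alpha_sec(T) · (T − T0). A minimal numeric sketch (the coefficient value is an assumed order of magnitude, not measured HDPE data):

```python
def thermal_strain(alpha_secant, T, T0):
    """Linear thermal strain from a secant expansion coefficient (1/K)."""
    return alpha_secant * (T - T0)

def secant_cte(L0, L, T, T0):
    """Secant CTE recovered from a dilatometry length trace."""
    return (L - L0) / (L0 * (T - T0))

alpha = 2.0e-4  # 1/K -- assumed order of magnitude for HDPE, not measured data
eps = thermal_strain(alpha, T=80.0, T0=20.0)  # strain 60 K above reference
```

    The secant (mean) coefficient differs from the tangent (instantaneous) coefficient dL/(L0 dT) whenever expansion is nonlinear in temperature, which is exactly the regime where heating rate matters.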

  18. Detailed high-resolution three-dimensional simulations of OMEGA separated reactants inertial confinement fusion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Haines, Brian M., E-mail: bmhaines@lanl.gov; Fincke, James R.; Shah, Rahul C.; Boswell, Melissa; Fowler, Malcolm M.; Gore, Robert A.; Hayes-Sterbenz, Anna C.; Jungman, Gerard; Klein, Andreas; Rundberg, Robert S.; Steinkamp, Michael J.; Wilhelmy, Jerry B. [Los Alamos National Laboratory, MS T087, Los Alamos, New Mexico 87545 (United States); Grim, Gary P. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Forrest, Chad J.; Silverstein, Kevin; Marshall, Frederic J. [Laboratory for Laser Energetics, University of Rochester, Rochester, New York 14623 (United States)

    2016-07-15

    We present results from the comparison of high-resolution three-dimensional (3D) simulations with data from the implosions of inertial confinement fusion capsules with separated reactants performed on the OMEGA laser facility. Each capsule, referred to as a “CD Mixcap,” is filled with tritium and has a polystyrene (CH) shell with a deuterated polystyrene (CD) layer whose burial depth is varied. In these implosions, fusion reactions between deuterium and tritium ions can occur only in the presence of atomic mix between the gas fill and shell material. The simulations feature accurate models for all known experimental asymmetries and do not employ any adjustable parameters to improve agreement with experimental data. Simulations are performed with the RAGE radiation-hydrodynamics code using an Implicit Large Eddy Simulation (ILES) strategy for the hydrodynamics. We obtain good agreement with the experimental data, including the DT/TT neutron yield ratios used to diagnose mix, for all burial depths of the deuterated shell layer. Additionally, simulations demonstrate good agreement with converged simulations employing explicit models for plasma diffusion and viscosity, suggesting that the implicit sub-grid model used in ILES is sufficient to model these processes in these experiments. In our simulations, mixing is driven by short-wavelength asymmetries and longer-wavelength features are responsible for developing flows that transport mixed material towards the center of the hot spot. Mix material transported by this process is responsible for most of the mix (DT) yield even for the capsule with a CD layer adjacent to the tritium fuel. Consistent with our previous results, mix does not play a significant role in TT neutron yield degradation; instead, this is dominated by the displacement of fuel from the center of the implosion due to the development of turbulent instabilities seeded by long-wavelength asymmetries. Through these processes, the long

  19. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
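
    LSH itself, independent of the LSB-tree machinery, can be sketched with the classic random-hyperplane family for cosine similarity: points hashing to the same bit signature land in the same bucket, so a query only scans its own bucket instead of the whole set. This is a generic illustration, not the paper's construction:

```python
import random

def make_hyperplanes(dim, n_planes, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def lsh_key(point, planes):
    # One bit per hyperplane: which side of the plane the point falls on.
    return tuple(int(sum(x * w for x, w in zip(point, plane)) >= 0)
                 for plane in planes)

def build_index(points, planes):
    buckets = {}
    for idx, p in enumerate(points):
        buckets.setdefault(lsh_key(p, planes), []).append(idx)
    return buckets

points = [[1.0, 0.1], [0.9, 0.2], [-1.0, 0.0]]
planes = make_hyperplanes(dim=2, n_planes=4)
index = build_index(points, planes)
```

    Multiple independent hash tables are used in practice to boost recall; the LSB-tree's contribution is organizing such hashes so they fit naturally into a B-tree and hence a relational database.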

  20. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

    This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods and this allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the General Public License, as part of the R software project.

  1. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001. 2. School of Physics, University of the Stellenbosch, Private Bag X1, 7602, Matieland. 3. School of Physics, University of Kwazulu...

  2. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
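
    LSB matching, the baseline scheme HUGO is compared against, is simple to sketch: when a pixel's least-significant bit disagrees with the message bit, add or subtract 1 at random instead of overwriting the bit. Pixel values are assumed to be 8-bit here:

```python
import random

def lsb_match_embed(pixels, bits, seed=0):
    """Embed bits into 8-bit pixel values by +/-1 LSB matching."""
    rng = random.Random(seed)
    out = list(pixels)
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            if out[i] == 0:        # stay inside the 0..255 range
                out[i] += 1
            elif out[i] == 255:
                out[i] -= 1
            else:
                out[i] += rng.choice((-1, 1))
    return out

def lsb_extract(pixels, n_bits):
    return [p % 2 for p in pixels[:n_bits]]

cover = [128, 64, 37, 255, 0, 91]
message = [1, 0, 0, 1, 1, 0]
stego = lsb_match_embed(cover, message)
```

    The random ±1 avoids the pairwise value artifacts of plain LSB replacement; HUGO goes further by choosing where to make such changes so that a steganalyst's feature vector barely moves.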

  3. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

    Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in the cluster. During clustering, certain difficulties are ubiquitous and abundant when dealing with high-dimensional data. Previous work using anonymization methods for high-dimensional data spaces failed to address the problem of dimensionality reduction when non-binary databases are included. In this work we study methods of dimensionality reduction for non-binary databases. Analyzing the behavior of dimensionality reduction for non-binary databases results in performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features which are correspondingly replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and then during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  4. TripAdvisor^{N-D}: A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail.

    Science.gov (United States)

    Nam, Julia EunJu; Mueller, Klaus

    2013-02-01

    Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
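
    The "tilt the projection plane" interaction reduces, mathematically, to choosing two orthonormal vectors in N-D and taking dot products; tilting swaps in a new direction vector and re-orthogonalizes. A minimal sketch with Gram-Schmidt (the vectors are chosen arbitrarily for illustration):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def projection_basis(e1, e2):
    """Orthonormal basis of the viewing plane spanned by e1, e2 (Gram-Schmidt)."""
    u1 = normalize(e1)
    u2 = [b - dot(e2, u1) * a for a, b in zip(u1, e2)]
    return u1, normalize(u2)

def project(points, u1, u2):
    """2-D scatterplot coordinates of N-D points on the viewing plane."""
    return [(dot(p, u1), dot(p, u2)) for p in points]

u1, u2 = projection_basis([1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0])
coords = project([[2.0, 3.0, 5.0, 7.0]], u1, u2)
```

    Animating a gradual change of `e2` while re-running `projection_basis` produces exactly the smooth plane-tilting scatterplot transitions the system describes, with motion parallax arising from the coordinates' continuous change.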

  5. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional distance functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measure called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
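
    The curse of dimensionality mentioned above can be seen directly: for random points, the relative gap between the nearest and furthest distances from a query collapses as the dimension grows, which is why plain Euclidean distance stops being discriminative. A quick self-contained check:

```python
import math
import random

def distance_spread(dim, n_points=200, seed=0):
    """Relative contrast (max - min) / min of distances from a random query."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    q = [rng.random() for _ in range(dim)]
    d = [math.dist(q, p) for p in pts]
    return (max(d) - min(d)) / min(d)

low = distance_spread(dim=2)    # large contrast: neighbors are distinguishable
high = distance_spread(dim=1000)  # small contrast: distances concentrate
```

    Angle-based dissimilarities such as RIA are one response to this concentration effect, since they measure orientation relative to the data's principal structure rather than raw Euclidean separation.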

  6. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  7. Three-dimensional simulations of Nova capsule implosion experiments

    International Nuclear Information System (INIS)

    Marinak, M.M.; Tipton, R.E.; Landen, O.L.

    1995-01-01

    Capsule implosion experiments carried out on the Nova laser are simulated with the three-dimensional HYDRA radiation hydrodynamics code. Simulations of ordered near single mode perturbations indicate that structures which evolve into round spikes can penetrate farthest into the hot spot. Bubble-shaped perturbations can burn through the capsule shell fastest, however, causing even more damage. Simulations of a capsule with multimode perturbations shows spike amplitudes evolving in good agreement with a saturation model during the deceleration phase. The presence of sizable low mode asymmetry, caused either by drive asymmetry or perturbations in the capsule shell, can dramatically affect the manner in which spikes approach the center of the hot spot. Three-dimensional coupling between the low mode shell perturbations intrinsic to Nova capsules and the drive asymmetry brings the simulated yields into closer agreement with the experimental values

  8. Experiments with three-dimensional riblets as an idealized model of shark skin

    Energy Technology Data Exchange (ETDEWEB)

    Bechert, D.W.; Bruse, M.; Hage, W. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Berlin (Germany). Dept. of Turbulence Res.

    2000-05-01

    The skin of fast sharks exhibits a rather intriguing three-dimensional rib pattern. Therefore, the question arises whether or not such three-dimensional riblet surfaces may produce an equivalent or even higher drag reduction than straight two-dimensional riblets. Previously, the latter have been shown to reduce turbulent wall shear stress by up to 10%. Hence, the drag reduction by three-dimensional riblet surfaces is investigated experimentally. Our idealized 3D-surface consists of sharp-edged fin-shaped elements arranged in an interlocking array. The turbulent wall shear stress on this surface is measured using direct force balances. In a first attempt, wind tunnel experiments with about 365000 tiny fin elements per test surface have been carried out. Due to the complexity of the surface manufacturing process, a comprehensive parametric study was not possible. These initial wind tunnel data, however, hinted at an appreciable drag reduction. Subsequently, in order to have a better judgement on the potential of these 3D-surfaces, oil channel experiments are carried out. In our new oil channel, the geometrical dimensions of the fins can be magnified 10 times in size as compared to the initial wind tunnel experiments, i.e., from typically 0.5 mm to 5 mm. For these latter oil channel experiments, novel test plates with variable fin configuration have been manufactured, with 1920-4000 fins. This enhanced variability permits measurements with a comparatively large parameter range. As a result of our measurements, it can be concluded, that 3D-riblet surfaces do indeed produce an appreciable drag reduction. We found as much as 7.3% decreased turbulent shear stress, as compared to a smooth reference plate.

  9. Application of the three-dimensional transport code to analysis of the neutron streaming experiment

    International Nuclear Information System (INIS)

    Chatani, K.; Slater, C.O.

    1990-01-01

The neutron streaming through an experimental mock-up of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway was recalculated with a three-dimensional discrete ordinates code. The experiment was conducted at the Tower Shielding Facility at Oak Ridge National Laboratory in 1976 and 1977. The measurement of the neutron flux, using Bonner ball detectors, indicated nine orders of attenuation in the empty pipeway, which contained two 90-deg bends and was surrounded by concrete walls. The measurement data were originally analyzed using the DOT3.5 two-dimensional discrete ordinates radiation transport code. However, the results did not agree with measurement data at the bend because of the difficulties in modeling the three-dimensional configurations using two-dimensional methods. The two-dimensional calculations used a three-step procedure in which each of the three legs making the two 90-deg bends was a separate calculation. The experiment was recently analyzed with the TORT three-dimensional discrete ordinates radiation transport code, not only to compare the calculational results with the experimental results, but also to compare with results obtained from analyses in Japan using DOT3.5, MORSE, and ENSEMBLE, a three-dimensional discrete ordinates radiation transport code developed in Japan.

  10. Assessment of wall friction model in multi-dimensional component of MARS with air–water cross flow experiment

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jin-Hwa [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Choi, Chi-Jin [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Euh, Dong-Jin [Korea Atomic Energy Research Institute, 989-111, Daedeok-daero, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Park, Goon-Cherl [Nuclear Thermal-Hydraulic Engineering Laboratory, Seoul National University, Gwanak 599, Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)

    2017-02-15

Recently, high-precision, high-accuracy analysis of multi-dimensional thermal-hydraulic phenomena in nuclear power plants has become a state-of-the-art issue. The system analysis code MARS adopted a multi-dimensional module to simulate such phenomena more accurately. Although the module is intended to represent multi-dimensional phenomena, the models and correlations implemented in it are one-dimensional empirical ones based on one-dimensional pipe experiments. Prior to the application of multi-dimensional simulation tools, therefore, the constitutive models for two-phase flow, such as the wall friction model, need to be carefully validated. In particular, in a Direct Vessel Injection (DVI) system, the injected emergency core coolant (ECC) on the upper part of the downcomer interacts with the lateral steam flow during the reflood phase of a Large-Break Loss-Of-Coolant Accident (LBLOCA). The interaction between the falling film and the lateral steam flow induces a multi-dimensional two-phase flow. The prediction of ECC flow behavior plays a key role in determining the amount of coolant available for core cooling. Therefore, the wall friction model used to simulate these multi-dimensional phenomena should be assessed against multi-dimensional experimental results. In this paper, air-water cross film flow experiments simulating the multi-dimensional phenomenon in the upper part of the downcomer are introduced as a conceptual problem. The two-dimensional local liquid film velocity and thickness data were used as benchmark data for code assessment, and the previous wall friction model of MARS-MultiD in the annular flow regime was then modified. As a result, the modified MARS-MultiD produced improved calculation results compared with the previous version.

  11. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype, changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals).

  12. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose using the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
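The FAIR recipe — rank features by the two-sample t-statistic, keep the top m, then apply the independence (diagonal) rule — can be sketched in a few lines. The sketch below is an illustrative numpy reconstruction of the idea, not the authors' code; the toy data, the choice m=10 and all variable names are invented for the example.

```python
import numpy as np

def two_sample_t(X, y):
    """Per-feature two-sample t-statistics between classes 0 and 1."""
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = len(X0), len(X1)
    num = X1.mean(axis=0) - X0.mean(axis=0)
    den = np.sqrt(X0.var(axis=0, ddof=1) / n0 + X1.var(axis=0, ddof=1) / n1)
    return num / den

def fair_classifier(X, y, m):
    """Keep the m features with largest |t|; classify with the independence
    rule (diagonal discriminant) restricted to that subset."""
    t = two_sample_t(X, y)
    keep = np.argsort(-np.abs(t))[:m]
    mu0 = X[y == 0][:, keep].mean(axis=0)
    mu1 = X[y == 1][:, keep].mean(axis=0)
    s2 = X[:, keep].var(axis=0, ddof=1)     # per-feature variances only
    def predict(Xnew):
        d0 = ((Xnew[:, keep] - mu0) ** 2 / s2).sum(axis=1)
        d1 = ((Xnew[:, keep] - mu1) ** 2 / s2).sum(axis=1)
        return (d1 < d0).astype(int)
    return predict

# Toy data: 200 samples, 1000 features, only the first 10 informative.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 1000))
X[:, :10] += 2.0 * y[:, None]
predict = fair_classifier(X, y, m=10)
acc = (predict(X) == y).mean()
```

With 990 pure-noise features, using all of them would dilute the centroid estimates; screening by |t| first keeps the signal features and the diagonal rule then separates the classes almost perfectly on this toy problem.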

  13. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel

  14. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignment represents in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  15. Analysis of the OPERA-15 two-dimensional voiding experiment using the SAS4A code

    International Nuclear Information System (INIS)

    Briggs, L.L.

    1984-01-01

Overall, SAS4A appears to do a good job of simulating the OPERA-15 experiment. For most of the experimental parameters, the code calculations compare quite well with the experimental data. The lack of a multi-dimensional voiding model has the effect of extending the flow coastdown time until voiding starts; otherwise, the code simulates the accident progression satisfactorily. These results indicate a need for further work in this area, in the form of a tandem analysis by a two-dimensional flow code and a one-dimensional version of that code, to confirm the observations derived from the SAS4A analysis.

  16. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    Science.gov (United States)

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real-world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a new, highly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high-viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image-processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at relatively low cost.

  17. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data points. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  18. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
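A minimal illustration of sparse estimation in the p >> n regime described above: the sketch below implements the LASSO by cyclic coordinate descent in plain numpy. The solver, the penalty level and the toy data are our own choices for the example, not taken from the talk.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=100):
    """LASSO via cyclic coordinate descent:
    minimize (1/2n) ||y - X b||^2 + alpha ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                      # residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]       # remove feature j's contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, alpha) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

# p >> n: 50 samples, 500 predictors, only 3 truly nonzero coefficients.
rng = np.random.default_rng(1)
n, p = 50, 500
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta_hat = lasso_cd(X, y, alpha=0.2)
support = np.flatnonzero(np.abs(beta_hat) > 1e-6)
```

Even with ten times more variables than samples, the L1 penalty drives almost all coefficients exactly to zero, so the recovered support is a small set containing the three true predictors.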

  19. Three-dimensional Simulation of Gas Conductance Measurement Experiments on Alcator C-Mod

    International Nuclear Information System (INIS)

    Stotler, D.P.; LaBombard, B.

    2004-01-01

Three-dimensional Monte Carlo neutral transport simulations of gas flow through the Alcator C-Mod subdivertor yield conductances comparable to those found in dedicated experiments. All are significantly smaller than the conductance found with the previously used axisymmetric geometry. A benchmarking exercise of the code against known conductance values for gas flow through a simple pipe provides a physical basis for interpreting the comparison of the three-dimensional and experimental C-Mod conductances.

  20. Quantitative study of quasi-one-dimensional Bose gas experiments via the stochastic Gross-Pitaevskii equation

    International Nuclear Information System (INIS)

    Cockburn, S. P.; Gallucci, D.; Proukakis, N. P.

    2011-01-01

    The stochastic Gross-Pitaevskii equation is shown to be an excellent model for quasi-one-dimensional Bose gas experiments, accurately reproducing the in situ density profiles recently obtained in the experiments of Trebbia et al.[Phys. Rev. Lett. 97, 250403 (2006)] and van Amerongen et al.[Phys. Rev. Lett. 100, 090402 (2008)] and the density fluctuation data reported by Armijo et al.[Phys. Rev. Lett. 105, 230402 (2010)]. To facilitate such agreement, we propose and implement a quasi-one-dimensional extension to the one-dimensional stochastic Gross-Pitaevskii equation for the low-energy, axial modes, while atoms in excited transverse modes are treated as independent ideal Bose gases.

  1. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  2. Miniature robust five-dimensional fingertip force/torque sensor with high performance

    International Nuclear Information System (INIS)

    Liang, Qiaokang; Huang, Xiuxiang; Li, Zhongyang; Zhang, Dan; Ge, Yunjian

    2011-01-01

This paper proposes an innovative design and investigation for a five-dimensional fingertip force/torque sensor with a dual annular diaphragm. This sensor can be applied to a robot hand to measure forces along the X-, Y- and Z-axes (F_x, F_y and F_z) and moments about the X- and Y-axes (M_x and M_y) simultaneously. In particular, the details of the sensing principle, the structural design and the overload protection mechanism are presented. Afterward, based on the design-of-experiments approach provided by the software ANSYS®, a finite element analysis and an optimization design are performed, with the objective of achieving both high sensitivity and high stiffness of the sensor. Furthermore, static and dynamic calibrations based on the neural network method are carried out. Finally, an application of the developed sensor on a dexterous robot hand is demonstrated. The results of the calibration experiments and the application show that the developed sensor possesses high performance and robustness.

  3. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  4. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  5. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
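The weight-finding step at the heart of such a barycentric approach — nonnegative weights that sum to one and reproduce a query point, with approximation errors allowed explicitly — can be posed as a small linear program. The sketch below is a hedged reconstruction using scipy's linprog, not the paper's exact formulation; the library of states and the L1 error objective are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(library, x):
    """Find w >= 0 with sum(w) = 1 minimizing the L1 error
    || library.T @ w - x ||_1, error split into slacks e+ and e-."""
    k, d = library.shape
    # variables: [w (k), e_plus (d), e_minus (d)], all >= 0 (linprog default)
    c = np.concatenate([np.zeros(k), np.ones(d), np.ones(d)])
    A_eq = np.zeros((d + 1, k + 2 * d))
    A_eq[:d, :k] = library.T          # combination of library states
    A_eq[:d, k:k + d] = np.eye(d)     # positive error slack
    A_eq[:d, k + d:] = -np.eye(d)     # negative error slack
    A_eq[d, :k] = 1.0                 # weights sum to one
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)
    return res.x[:k], res.fun         # weights, total L1 error

# Toy library of 3D states; the query lies inside their convex hull.
library = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
x = np.array([0.2, 0.3, 0.1])
w, err = barycentric_weights(library, x)
```

For a query inside the convex hull of the library the error is zero and the weights are genuine barycentric coordinates; for points outside, the slack variables absorb the unavoidable approximation error, which is the feature the abstract emphasizes.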

  6. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest-neighbor-graph (K-NNG) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset, so as to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
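The ensemble part of such a detector — several KNN-distance scorers built on random subsets of the nominal sample, with their scores averaged — can be sketched in plain numpy. The deep-autoencoder compression step is deliberately omitted here, and all data and parameters are invented for the example; this is an illustration of the ensemble idea only, not the paper's implementation.

```python
import numpy as np

def knn_score(train, queries, k):
    """Mean distance from each query to its k nearest training points."""
    d = np.linalg.norm(queries[:, None, :] - train[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

def ensemble_knn_anomaly(train, queries, k=5, n_members=10, subset=0.5, seed=0):
    """Average the KNN distance score over detectors built on random
    subsets of the (assumed nominal) training sample."""
    rng = np.random.default_rng(seed)
    m = int(subset * len(train))
    scores = np.zeros(len(queries))
    for _ in range(n_members):
        idx = rng.choice(len(train), size=m, replace=False)
        scores += knn_score(train[idx], queries, k)
    return scores / n_members

# Nominal sample: 2D Gaussian cluster; queries: one inlier, one far outlier.
rng = np.random.default_rng(42)
train = rng.normal(size=(300, 2))
queries = np.array([[0.1, -0.2], [8.0, 8.0]])
scores = ensemble_knn_anomaly(train, queries)
```

A higher averaged score marks a point as more anomalous; subsampling makes each member cheap and the average more stable than a single KNN detector on the full sample.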

  7. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  8. Sensitivity experiments with a one-dimensional coupled plume - iceflow model

    Science.gov (United States)

    Beckmann, Johanna; Perette, Mahé; Alexander, David; Calov, Reinhard; Ganopolski, Andrey

    2016-04-01

Over the last few decades the mass balance of the Greenland Ice Sheet has become increasingly negative, caused by enhanced surface melting and speedup of the marine-terminating outlet glaciers at the ice sheet margins. Glacier speedup has been related, among other factors, to enhanced submarine melting, which in turn is caused by warming of the surrounding ocean and, less obviously, by increased subglacial discharge. While ice-ocean processes potentially play an important role in recent and future mass balance changes of the Greenland Ice Sheet, they remain poorly understood physically. In this work we performed numerical experiments with a one-dimensional plume model coupled to a one-dimensional iceflow model. First we investigated the sensitivity of the submarine melt rate to changes in ocean properties (ocean temperature and salinity), to the amount of subglacial discharge, and to the glacier's tongue geometry itself. A second set of experiments investigates the response of the coupled model, i.e. the dynamical response of the outlet glacier to altered submarine melt, which results in a new glacier geometry and updated melt rates.

  9. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  10. Non-dimensional scaling of impact fast ignition experiments

    International Nuclear Information System (INIS)

    Farley, D R; Shigemori, K; Murakami, M; Azechi, H

    2008-01-01

Recent experiments at the Osaka University Institute for Laser Engineering (ILE) showed that 'Impact Fast Ignition' (IFI) could increase the neutron yield of inertial fusion targets by two orders of magnitude [1]. IFI utilizes the thermal and kinetic energy of a laser-accelerated disk to impact an imploded fusion target. ILE researchers estimate a disk velocity of 10^8 cm/sec is needed to ignite the fusion target [2]. To be able to study the IFI concept using lasers different from that at ILE, appropriate non-dimensionalization of the flow should be done. Analysis of the rocket equation gives the parameters needed for producing similar IFI results with different lasers. This analysis shows that a variety of laboratory-scale commercial lasers could produce results useful to full-scale ILE experiments.

  11. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

Training classifiers on skewed data is a technically challenging task, and it becomes more difficult still when the data is simultaneously high-dimensional. Skewed data of this kind often appears in the biomedical field. In this study, we try to deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
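The asymmetric-bagging-with-feature-subspace idea can be sketched as follows: every ensemble member sees all minority samples, an equally sized random draw of majority samples, and a random subset of features. To keep the sketch dependency-free, a nearest-centroid rule stands in for the SVM base classifier used in the study, and all data, names and parameters are invented.

```python
import numpy as np

def centroid_fit(X, y):
    """Class centroids for a two-class training set."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def asbagging_fss(X, y, n_members=15, n_feats=20, seed=0):
    """Asymmetric bagging with random feature subspaces; majority vote."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    members = []
    for _ in range(n_members):
        maj = rng.choice(majority, size=len(minority), replace=False)
        rows = np.concatenate([minority, maj])       # balanced subsample
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        c0, c1 = centroid_fit(X[rows][:, feats], y[rows])
        members.append((feats, c0, c1))
    def predict(Xnew):
        votes = np.zeros(len(Xnew))
        for feats, c0, c1 in members:
            Z = Xnew[:, feats]
            votes += (np.linalg.norm(Z - c1, axis=1)
                      < np.linalg.norm(Z - c0, axis=1))
        return (votes > n_members / 2).astype(int)
    return predict

# Imbalanced toy data: 500 features, 200 majority vs 20 minority samples.
rng = np.random.default_rng(7)
X0 = rng.normal(size=(200, 500))
X1 = rng.normal(size=(20, 500)) + 1.0
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 20)
predict = asbagging_fss(X, y)
recall = predict(X[y == 1]).mean()
```

Undersampling the majority class rebalances each member's training set, while the random feature subspaces decorrelate the members so that the majority vote gains from the ensemble.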

  12. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n -> c in (0,∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.

  13. High dimensional biological data retrieval optimization with NoSQL technology

    Science.gov (United States)

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. 
We aim to use this new data

  14. High dimensional biological data retrieval optimization with NoSQL technology.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries that retrieve the gene expression records of hundreds of different patients from relational databases suffer from poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. 
We aim to use this new data model as a basis for migrating
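The paper's actual HBase schema is not given in the abstract; the sketch below is a hypothetical illustration of the general key-value idea it describes. A composite row key such as "study|gene|patient" keeps all expression values for one gene in one study contiguous in sorted order, so a single range scan replaces the multi-table joins a normalized relational schema would need:

```python
from bisect import bisect_left, bisect_right

class KeyValueStore:
    """A sorted key-value store emulating BigTable/HBase-style range scans."""
    def __init__(self):
        self._keys = []    # kept sorted, as in an SSTable
        self._vals = {}

    def put(self, key, value):
        if key not in self._vals:
            self._keys.insert(bisect_left(self._keys, key), key)
        self._vals[key] = value

    def scan(self, prefix):
        """Return all (key, value) pairs whose key starts with `prefix`."""
        lo = bisect_left(self._keys, prefix)
        hi = bisect_right(self._keys, prefix + "\xff")
        return [(k, self._vals[k]) for k in self._keys[lo:hi]]

store = KeyValueStore()
# Hypothetical composite row keys: study|gene|patient -> expression level.
store.put("GSE001|TP53|patient07", 8.1)
store.put("GSE001|TP53|patient12", 7.4)
store.put("GSE001|BRCA1|patient07", 5.2)

# One contiguous scan fetches every TP53 value in study GSE001:
hits = store.scan("GSE001|TP53|")
print(len(hits))  # 2
```

The study and gene identifiers here are placeholders; the point is only that the query pattern maps to a single sorted-range read.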

  15. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Summary Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  16. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  17. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by a polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expansion remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
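The core dimension-reduction idea in the abstract, approximating a high-dimensional function by a sum of low-dimensional component functions, can be sketched as a first-order cut-HDMR/ANOVA decomposition around an anchor point. This is a generic illustration (the test function and anchor are made up, and the paper's adaptive criterion is not reproduced):

```python
def anova_first_order(f, c):
    """First-order ANOVA/cut-HDMR approximation of f anchored at point c:
    f(x) ~ f(c) + sum_i [ f(c with coordinate i set to x_i) - f(c) ].
    Each component varies only one input, so a surrogate (e.g. a PCE)
    need only be built per dimension rather than over the joint space."""
    f0 = f(c)
    def approx(x):
        total = f0
        for i in range(len(c)):
            xi = list(c)
            xi[i] = x[i]              # vary one coordinate at a time
            total += f(xi) - f0
        return total
    return approx

# For an additive function the first-order decomposition is exact:
f = lambda x: 2.0 * x[0] + x[1] ** 2 - 3.0 * x[2]
approx = anova_first_order(f, c=[0.0, 0.0, 0.0])
x = [1.0, 2.0, -1.0]
print(abs(f(x) - approx(x)))  # 0.0
```

For functions with interaction terms the residual is nonzero, which is exactly what higher-order ANOVA components (and the paper's adaptive selection of them) are meant to capture.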

  18. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
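The preprocessing stage described above rests on slow feature analysis. A minimal linear SFA sketch (not the paper's hierarchical network; sources and mixing below are illustrative) whitens the observations and then takes the direction whose whitened signal changes most slowly, i.e. the smallest-eigenvalue eigenvector of the covariance of the temporal differences:

```python
import math

def eig_sym2(a, b, c):
    """Eigenpairs of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    t = 0.5 * math.atan2(2.0 * b, a - c)
    v1, v2 = (math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))
    ray = lambda v: a * v[0] ** 2 + 2.0 * b * v[0] * v[1] + c * v[1] ** 2
    return (ray(v1), v1), (ray(v2), v2)

def cov2(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

n = 1000
slow = [math.sin(2 * math.pi * t / 400) for t in range(n)]  # slow source
fast = [math.sin(2 * math.pi * t / 13) for t in range(n)]   # fast source
x1 = [s + 0.5 * f for s, f in zip(slow, fast)]              # mixtures
x2 = [0.3 * s - 0.8 * f for s, f in zip(slow, fast)]

# 1) Whiten: rotate onto principal axes and rescale to unit variance.
(l1, e1), (l2, e2) = eig_sym2(cov2(x1, x1), cov2(x1, x2), cov2(x2, x2))
z1 = [(e1[0] * a + e1[1] * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
z2 = [(e2[0] * a + e2[1] * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

# 2) Slowest direction = smallest-eigenvalue eigenvector of the
#    covariance of the temporal differences of the whitened signal.
d1 = [b - a for a, b in zip(z1, z1[1:])]
d2 = [b - a for a, b in zip(z2, z2[1:])]
_, v = min(eig_sym2(cov2(d1, d1), cov2(d1, d2), cov2(d2, d2)),
           key=lambda p: p[0])
y = [v[0] * a + v[1] * b for a, b in zip(z1, z2)]

# The extracted feature matches the slow source up to sign.
corr = cov2(y, slow) / math.sqrt(cov2(y, y) * cov2(slow, slow))
print(round(abs(corr), 2))
```

The slow feature recovered this way would then serve as the low-dimensional state representation on which a reward-based learner operates.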

  19. Small angle X-ray scattering experiments with three-dimensional imaging gas detectors

    International Nuclear Information System (INIS)

    La Monaca, A.; Iannuzzi, M.; Messi, R.

    1985-01-01

    Measurements of small angle X-ray scattering of Lupolen-R, dry collagen and dry cornea are presented. The experiments have been performed with synchrotron radiation and a new three-dimensional imaging drift-chamber gas detector

  20. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
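The recurrence plot machinery used above can be sketched in a few lines (the signals below are illustrative, not the Lorenz96 or vegetation data). A point (i, j) recurs when states i and j are closer than a threshold, and determinism (DET), the share of recurrence points lying on diagonal lines, separates regular from stochastic dynamics:

```python
import math, random

def recurrence_matrix(x, eps):
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def determinism(R, lmin=2):
    """Fraction of recurrence points on diagonals of length >= lmin."""
    n = len(R)
    on_diag = total = 0
    for k in range(-(n - 1), n):      # every diagonal except the main one
        if k == 0:
            continue
        line = [R[i][i + k] for i in range(max(0, -k), min(n, n - k))]
        run = 0
        for v in line + [0]:          # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_diag += run
                total += run
                run = 0
    return on_diag / total if total else 0.0

random.seed(0)
periodic = [math.sin(0.4 * t) for t in range(200)]
noise = [random.uniform(-1, 1) for _ in range(200)]

det_p = determinism(recurrence_matrix(periodic, 0.1))
det_n = determinism(recurrence_matrix(noise, 0.1))
print(det_p > det_n)  # True: periodic dynamics form long diagonal lines
```

For multivariate states one would replace the scalar distance with a norm over state vectors; the measures themselves are unchanged, which is what makes them applicable to high-dimensional dynamics.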

  1. Five-dimensional Myers-Perry black holes cannot be overspun in gedanken experiments

    Science.gov (United States)

    An, Jincheng; Shan, Jieru; Zhang, Hongbao; Zhao, Suting

    2018-05-01

    We apply the new version of a gedanken experiment designed recently by Sorce and Wald to overspin the five-dimensional Myers-Perry black holes. As a result, the extremal black holes cannot be overspun at the linear order. On the other hand, although the nearly extremal black holes could be overspun at the linear order, this process is shown to be prohibited by the quadratic order correction. Thus, no violation of the weak cosmic censorship conjecture occurs around the five-dimensional Myers-Perry black holes.

  2. Two-dimensional NMR spectrometry

    International Nuclear Information System (INIS)

    Farrar, T.C.

    1987-01-01

    This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article, an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂

  3. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. 
To train the

  4. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. 
To train the

  5. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. 
To train the
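For contrast with the gradient-free method these abstracts propose, the *classic* gradient-based active-subspace construction they mention can be sketched directly (the ridge function, direction, and sample sizes below are illustrative). For f(x) = g(w · x) the gradient covariance C = E[∇f ∇fᵀ] is rank one, and its leading eigenvector, found here by power iteration, recovers the active direction w:

```python
import math, random

random.seed(1)
d = 5
w = [0.6, -0.3, 0.5, 0.4, -0.2]          # hidden active direction (made up)
nw = math.sqrt(sum(c * c for c in w))
w = [c / nw for c in w]

def grad_f(x):
    """Gradient of f(x) = g(w.x) with g(u) = u^3 + u, i.e. g'(u) * w."""
    u = sum(wi * xi for wi, xi in zip(w, x))
    return [(3.0 * u * u + 1.0) * wi for wi in w]

# Monte Carlo estimate of the gradient covariance matrix C.
C = [[0.0] * d for _ in range(d)]
for _ in range(500):
    g = grad_f([random.gauss(0, 1) for _ in range(d)])
    for i in range(d):
        for j in range(d):
            C[i][j] += g[i] * g[j] / 500

# Power iteration for the dominant eigenvector of C.
v = [1.0] * d
for _ in range(50):
    v = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
    nv = math.sqrt(sum(c * c for c in v))
    v = [c / nv for c in v]

align = abs(sum(a * b for a, b in zip(v, w)))
print(round(align, 3))  # ~1.0: the active direction is recovered
```

The dependence on grad_f is exactly the limitation the abstracts point out: when only noisy function values are available, this estimator is not applicable, which motivates their GP-based alternative.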

  6. Introduction to the conformational investigation of peptides and proteins by using two-dimensional proton NMR experiments

    International Nuclear Information System (INIS)

    Neumann, J.M.; Macquaire, F.

    1991-01-01

    This report presents an elementary introduction to the conformational study of peptides and proteins using two-dimensional proton NMR experiments. First, some general features of protein structures are summarized. A second chapter is devoted to the basic NMR experiments and to the spectral parameters which provide structural information. This description is illustrated by NMR spectra of peptides. The third chapter concerns the most standard two-dimensional proton NMR experiments and their use for conformational studies of peptides and proteins. Lastly, an example of an NMR structural investigation of a peptide is reported [fr]

  7. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography-a high-resolution coherent diffractive imaging technique-can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  8. High-resolution non-destructive three-dimensional imaging of integrated circuits

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  9. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow the efficient classification of high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  10. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....
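The family of methods this record builds on, plain distance-based outlier ranking, can be sketched with the k-nearest-neighbour distance as the outlier score (the data are illustrative; the paper's own unbiased, non-grid-based score is not reproduced). Ranking lets a user take the top-n outliers instead of thresholding, the property the abstract highlights:

```python
def knn_outlier_scores(points, k=3):
    """Score each point by the distance to its k-th nearest neighbour."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(points) if j != i
        )
        scores.append(dists[k - 1])
    return scores

# A tight cluster around the origin plus one planted outlier.
data = [(0.0, 0.1), (0.1, 0.0), (-0.1, 0.1), (0.2, -0.1),
        (0.0, -0.2), (-0.2, 0.0), (9.0, 9.0)]
scores = knn_outlier_scores(data)
ranking = sorted(range(len(data)), key=lambda i: -scores[i])
print(ranking[0])  # 6: the planted outlier ranks first
```

In high dimensions raw Euclidean kNN scores lose contrast, which is precisely the dimensionality bias the paper's approach is designed to avoid.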

  11. Optimized set of two-dimensional experiments for fast sequential assignment, secondary structure determination, and backbone fold validation of 13C/15N-labelled proteins

    International Nuclear Information System (INIS)

    Bersch, Beate; Rossy, Emmanuel; Coves, Jacques; Brutscher, Bernhard

    2003-01-01

    NMR experiments are presented which allow backbone resonance assignment, secondary structure identification, and in favorable cases also molecular fold topology determination from a series of two-dimensional 1 H- 15 N HSQC-like spectra. The 1 H- 15 N correlation peaks are frequency shifted by an amount ± ω X along the 15 N dimension, where ω X is the C α , C β , or H α frequency of the same or the preceding residue. Because of the low dimensionality (2D) of the experiments, high-resolution spectra are obtained in a short overall experimental time. The whole series of seven experiments can be performed in typically less than one day. This approach significantly reduces experimental time when compared to the standard 3D-based methods. The here presented methodology is thus especially appealing in the context of high-throughput NMR studies of protein structure, dynamics or molecular interfaces

  12. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos occurs in the wind power system when parameter variations or external disturbances exceed certain thresholds.
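The Lyapunov-exponent test for chaos used above can be illustrated on a textbook system (the logistic map, not the 11-dimensional wind power model, whose equations are not reproduced in the abstract). A positive largest exponent indicates chaos; for x → r·x·(1−x) with r = 4 the exact value is ln 2:

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    """Largest Lyapunov exponent of the logistic map: the time average
    of log |f'(x)| along the trajectory, after discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
    return acc / n

lam = lyapunov_logistic(4.0)
print(lam > 0)                      # True: chaotic (lambda ~ ln 2)
print(lyapunov_logistic(2.5) < 0)   # True: stable fixed point, no chaos
```

For a high-dimensional system like the wind power model one computes the full Lyapunov spectrum from the Jacobian along the trajectory, but the criterion, at least one positive exponent, is the same.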

  13. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  14. Computer experiments on dynamical cloud and space time fluctuations in one-dimensional meta-equilibrium plasmas

    International Nuclear Information System (INIS)

    Rouet, J.L.; Feix, M.R.

    1996-01-01

    The test particle picture is a central theory of weakly correlated plasmas. While experiments and computer experiments have confirmed the validity of this theory at thermal equilibrium, its extension to meta-equilibrium distributions presents interesting and intriguing points, connected to the under- or over-population of the high-velocity tail of these distributions, which have not yet been tested. Moreover, the general dynamical Debye cloud (a generalization of the static Debye cloud, which supposes a plasma at thermal equilibrium and a test particle of zero velocity) is presented for any test particle velocity and for three typical velocity distributions (equilibrium plus two meta-equilibrium distributions). The simulations deal with a one-dimensional two-component plasma, and the relevance of the check for real three-dimensional plasmas is outlined. Two kinds of results are presented: the dynamical cloud itself and the more usual density (or energy) fluctuation spectra. Special attention is paid to the behavior of long wavelengths, which requires long systems with very small graininess effects and, consequently, sizable computational effort. Finally, the divergence or absence of energy at small wave numbers, connected to the excess or lack of fast particles in the two above-mentioned meta-equilibrium distributions, is exhibited. copyright 1996 American Institute of Physics

  15. Disruption simulation experiment using high-frequency rastering electron beam as the heat source

    International Nuclear Information System (INIS)

    Yamazaki, S.; Seki, M.

    1987-01-01

    The disruption is a serious event which can reduce the lifetime of plasma-interactive components, so the effects of the resulting high heat flux on the wall materials must be clearly identified. The authors performed disruption simulation experiments to investigate melting, evaporation, and crack initiation behaviors using an electron beam facility as the heat source. The facility was improved with a high-frequency beam rastering system which provided a spatially and temporally uniform heat flux on wider test surfaces. Along with the experiments, thermal and mechanical analyses were also performed. A two-dimensional disruption thermal analysis code (DREAM) was developed for the analyses.

  16. Characterization of 3-dimensional superconductive thin film components for gravitational experiments in space

    Energy Technology Data Exchange (ETDEWEB)

    Hechler, S.; Nawrodt, R.; Nietzsche, S.; Vodel, W.; Seidel, P. [Friedrich-Schiller-Univ. Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [ZARM, Univ. Bremen (Germany); Loeffler, F. [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    2007-07-01

    Superconducting quantum interference devices (SQUIDs) are used for highly precise gravitational experiments. One of the most impressive experiments is the satellite test of the equivalence principle (STEP) of NASA/ESA. The STEP mission aims to prove a possible violation of Einstein's equivalence principle at an extreme level of accuracy of 1 part in 10{sup 18} in space. In this contribution we present automatically working measurement equipment to characterize 3-dimensional superconducting thin film components, e.g. pick-up coils and test masses, for STEP. The characterization is done by measurements of the transition temperature between the normal and the superconducting state using a specially built anti-cryostat. Above all, the setup was designed for use in normal LHe transport Dewars. The sample chamber has a volume of 150 cm{sup 3} and can be fully temperature controlled over a range from 4.2 K to 300 K with a resolution of better than 100 mK. (orig.)

  17. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  18. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, the article introduces a novel regression procedure for intrinsic variables constrained to a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define the distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in the original, principal component, and diffusion spaces are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
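    The two ingredients described above, a diffusion-map embedding followed by a Gaussian process whose kernel is evaluated on the embedded coordinates, can be sketched as follows. This is a minimal NumPy illustration: the kernel bandwidths, the density normalization, and the use of a plain RBF kernel on the diffusion coordinates are simplifying assumptions, not the authors' exact construction.

```python
import numpy as np

def diffusion_map(X, eps, n_comp=2, t=1):
    """Diffusion-map coordinates of the rows of X (simplified, alpha=1)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)                     # Gaussian affinity kernel
    q = K.sum(1)
    K = K / np.outer(q, q)                    # density normalization
    P = K / K.sum(1)[:, None]                 # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    idx = np.argsort(-vals.real)[1:n_comp + 1]  # skip trivial eigenvalue 1
    return (vals.real[idx] ** t) * vecs.real[:, idx]

def gp_predict(coords, y, coords_star, length=1.0, noise=1e-6):
    """GP regression with an RBF kernel on the (diffusion) coordinates."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(coords, coords) + noise * np.eye(len(coords))
    return k(coords_star, coords) @ np.linalg.solve(K, y)
```

In use, one would embed the proxy measurements with `diffusion_map` and then regress the quantity of interest on the embedded coordinates with `gp_predict`.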

  19. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since experiments in bioinformatics take time, biological datasets are sometimes small but of high dimensionality. Probability theory tells us that, in order to discover knowledge from a set of data, we need a sufficient number of samples; otherwise, the error bounds become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. To avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of one big map, and Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size is less of a concern, and errors tend to average out. In our experiments on microarray datasets, which are dense data composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and its recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.
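    A minimal sketch of the Multi-SOM idea: several small maps trained on bootstrap resamples, with neuron-level labels combined across maps. The majority vote used here is a hypothetical stand-in for the paper's Bayesian decision rule among similar neurons, and all sizes and learning rates are illustrative.

```python
import numpy as np

class SmallSOM:
    """A minimal Self-Organizing Map (illustrative, not optimized)."""
    def __init__(self, m, n, dim, rng):
        self.w = rng.normal(size=(m * n, dim))         # random initial weights
        grid = np.array([(i, j) for i in range(m) for j in range(n)])
        self.d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(-1)

    def bmu(self, x):
        return np.argmin(((self.w - x) ** 2).sum(1))   # best-matching unit

    def train(self, X, epochs=20, lr0=0.5, sigma0=1.5):
        for e in range(epochs):                        # decaying rate/radius
            lr = lr0 * (1 - e / epochs)
            sigma = max(sigma0 * (1 - e / epochs), 0.3)
            for x in X:
                h = np.exp(-self.d2[self.bmu(x)] / (2 * sigma ** 2))
                self.w += lr * h[:, None] * (x - self.w)

def multi_som_predict(Xtr, ytr, Xte, n_maps=5, seed=0):
    rng = np.random.default_rng(seed)
    vals, counts = np.unique(ytr, return_counts=True)
    fallback = int(vals[counts.argmax()])              # for unlabeled neurons
    votes = []
    for _ in range(n_maps):
        idx = rng.choice(len(Xtr), size=len(Xtr), replace=True)  # bootstrap
        som = SmallSOM(3, 3, Xtr.shape[1], rng)
        som.train(Xtr[idx])
        mapped = {}                                    # samples per neuron
        for x, y in zip(Xtr[idx], ytr[idx]):
            mapped.setdefault(som.bmu(x), []).append(int(y))
        label = {k: max(set(v), key=v.count) for k, v in mapped.items()}
        votes.append([label.get(som.bmu(x), fallback) for x in Xte])
    # majority vote across maps (stand-in for the Bayesian decision step)
    return np.array([np.bincount(col).argmax() for col in np.array(votes).T])
```

Each small map is labeled by the majority class of the training samples it absorbs, and the ensemble then votes per test point.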

  20. High-power laser experiments to study collisionless shock generation

    Directory of Open Access Journals (Sweden)

    Sakawa Y.

    2013-11-01

    Full Text Available A collisionless Weibel-instability-mediated shock in a self-generated magnetic field is studied using two-dimensional particle-in-cell simulations [Kato and Takabe, Astrophys. J. Lett. 681, L93 (2008)]. It is predicted that the generation of the Weibel shock requires an NIF-class high-power laser system. Collisionless electrostatic shocks have been produced in counter-streaming plasmas using the Gekko XII laser system [Kuramitsu et al., Phys. Rev. Lett. 106, 175002 (2011)]. A NIF facility-time proposal has been approved to study the formation of the collisionless Weibel shock. OMEGA and OMEGA EP experiments have been started to study the plasma conditions of counter-streaming plasmas required for the NIF experiment using Thomson scattering and to develop proton radiography diagnostics.

  1. Three-dimensional ultrasound. Early personal experience with a dedicated unit and literature review

    International Nuclear Information System (INIS)

    Cesarani, F.; Isolato, G.; Capello, S.; Bianchi, S.D.

    1999-01-01

    The authors report their preliminary clinical experience with three-dimensional ultrasound (3D US) in abdominal and small-parts imaging, comparing the yield of 3D versus 2D US through a literature review [it

  2. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined reside on a low-dimensional manifold, and in principle approximating the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation than approximating the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data to generate the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
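    The pipeline the abstract describes, estimating a low-dimensional projection from a sparse uniform subsample and then approximating the function in the projected space, can be sketched as follows. This is a hypothetical NumPy example: a linear PCA projection and a kernel ridge regressor stand in for the paper's manifold projection and neural network approximator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data on a 2-D linear manifold embedded in 50-D space.
n, D, d = 500, 50, 2
T = rng.normal(size=(n, d))                   # intrinsic coordinates
A = rng.normal(size=(d, D))
X = T @ A                                     # high-dimensional observations
y = np.sin(T[:, 0]) + 0.5 * T[:, 1]           # target defined on the manifold

# Estimate the projection from a sparse, uniformly drawn subsample only.
sub = rng.choice(n, size=50, replace=False)
mean = X[sub].mean(0)
_, _, Vt = np.linalg.svd(X[sub] - mean, full_matrices=False)
P = Vt[:d].T                                  # top principal directions

Z = (X - mean) @ P                            # low-dimensional projection
Z /= Z.std(0)                                 # normalize for the kernel below

# Kernel ridge regression in the projected space (stand-in for the network).
def krr_fit_predict(Ztr, ytr, Zte, lam=1e-3, gamma=0.5):
    def k(U, V):
        return np.exp(-gamma * ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(k(Ztr, Ztr) + lam * np.eye(len(Ztr)), ytr)
    return k(Zte, Ztr) @ alpha

pred = krr_fit_predict(Z[:400], y[:400], Z[400:])
mse = float(np.mean((pred - y[400:]) ** 2))
```

Even though the projection is fitted on only 50 of the 500 points, the regressor in the 2-D projected space approximates the target well, which is the qualitative effect the abstract argues for.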

  3. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  4. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  5. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  6. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
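    When the dimension approaches or exceeds the sample size, the plain sample covariance matrix is singular and its log-determinant degenerates, which is the difficulty motivating the regularized estimators this paper compares. A minimal sketch, using a fixed-intensity linear shrinkage toward a scaled identity as a hypothetical stand-in for the eight estimators compared in the paper:

```python
import numpy as np

def logdet_sample(X):
    """Log-determinant of the plain sample covariance matrix."""
    S = np.cov(X, rowvar=False)
    sign, ld = np.linalg.slogdet(S)
    return ld if sign > 0 else -np.inf      # degenerate when p >= n

def logdet_shrinkage(X, rho=0.1):
    """Linear shrinkage toward a scaled identity before the determinant.
    The fixed intensity rho is a simplifying assumption; a data-driven
    choice (e.g. Ledoit-Wolf) would be used in practice."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    _, ld = np.linalg.slogdet((1 - rho) * S + rho * target)
    return ld

# p > n: the sample covariance is singular, so its determinant collapses,
# while the shrunk estimate stays finite (the true covariance I has logdet 0).
rng = np.random.default_rng(0)
n, p = 80, 100
X = rng.normal(size=(n, p))
```

Running both estimators on the same draw makes the breakdown of the unregularized determinant visible immediately.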

  7. Vibrational spectra and thermal rectification in three-dimensional anharmonic lattices

    International Nuclear Information System (INIS)

    Lan Jinghua; Li Baowen

    2007-01-01

    We study thermal rectification in a three-dimensional model consisting of two segments of anharmonic lattices. One segment consists of layers of harmonic oscillator arrays coupled to a substrate potential, which is a three-dimensional Frenkel-Kontorova model, and the other segment is a three-dimensional Fermi-Pasta-Ulam model. We study the vibrational bands of the two lattices analytically and numerically, and find that, by choosing the system parameters properly, the rectification can be as high as a few thousand, which is high enough to be observed in experiment. Possible experiments in nanostructures are discussed.

  8. Design guidelines for high dimensional stability of CFRP optical bench

    Science.gov (United States)

    Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe

    2013-09-01

    In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly those embodying reflective optics, angular stability is critical. Angular (or warping) stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve a perfect laminate, and there will always be manufacturing errors in trying to reach a quasi-isotropic laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench), can easily be monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) of typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The composite material selected for the study is the unidirectional prepreg Tencate M55J/TC410: M55J is a high-modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be the ply angular errors, laminate in-plane parallelism (between the 0° ply directions of both facesheets), fiber volume fraction tolerance, and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize warping stability, taking into account the main dimensional stability contributors, the bench geometry, and the optical mount interface, are then proposed.

  9. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  10. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors.
For teachers

  11. Development and assessment of multi-dimensional flow model in MARS compared with the RPI air-water experiment

    International Nuclear Information System (INIS)

    Lee, Seok Min; Lee, Un Chul; Bae, Sung Won; Chung, Bub Dong

    2004-01-01

    The multi-dimensional flow models in system codes have been developed over many years. RELAP5-3D, CATHARE, and TRACE each have their own multi-dimensional flow models and have successfully applied them to system safety analysis. At KAERI, the MARS (Multi-dimensional Analysis of Reactor Safety) code was likewise developed, by integrating the RELAP5/MOD3 and COBRA-TF codes. Even though the COBRA-TF module can analyze three-dimensional flow, it is limited when applied to 3D shear-stress-dominated phenomena or cylindrical geometries. Therefore, multi-dimensional analysis models were newly developed by implementing three-dimensional momentum flux and diffusion terms. The multi-dimensional model has been assessed against multi-dimensional conceptual problems and CFD code results. Although the assessment results were reasonable, the model had not been validated against two-phase flow experimental data. In this paper, the multi-dimensional air-water two-phase flow experiment was simulated and analyzed.

  12. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  13. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.

  14. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
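    The two-step LASSLE idea, a LASSO pass that selects the sparse support of the VAR coefficients followed by a least squares refit restricted to that support, can be sketched for a VAR(1) model as follows. This is an illustrative NumPy implementation using ISTA (proximal gradient) for the LASSO step; the function name and penalty schedule are assumptions, not the authors' exact code.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator (the proximal map of the L1 penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lassle_var1(X, lam=0.1, iters=500):
    """Two-step sketch: (1) LASSO via ISTA selects the support of the
    VAR(1) coefficient matrix A; (2) least squares refits on that support."""
    Y, Z = X[1:], X[:-1]                    # regress X_t on X_{t-1}
    p = X.shape[1]
    L = np.linalg.norm(Z, 2) ** 2           # Lipschitz constant of gradient
    A = np.zeros((p, p))
    for _ in range(iters):                  # step 1: ISTA iterations
        G = (Z.T @ (Z @ A.T - Y)).T         # gradient of 0.5||Y - Z A^T||^2
        A = soft(A - G / L, lam * len(Y) / L)
    A_refit = np.zeros_like(A)
    for i in range(p):                      # step 2: LSE restricted to support
        s = np.flatnonzero(A[i])
        if s.size:
            A_refit[i, s] = np.linalg.lstsq(Z[:, s], Y[:, i], rcond=None)[0]
    return A_refit
```

The refit step removes the shrinkage bias that the LASSO step introduces on the selected coefficients, which is the motivation the abstract gives for the second stage.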

  15. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits) recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the same criterion as for qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements, with d a prime integer. Applying the result to two qubits (d=2), we find that the derived inequality reduces to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all possible states and local observables. Applying it further to two and three qutrits (d=3), we find Bell inequalities that are violated by three-dimensionally entangled states but not by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement, and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement, and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality than in the previously proposed inequalities.

  16. FPGA Implementation of one-dimensional and two-dimensional cellular automata

    International Nuclear Information System (INIS)

    D'Antone, I.

    1999-01-01

    This report describes the hardware implementation of one-dimensional and two-dimensional cellular automata (CAs). After a general introduction to cellular automata, we consider a one-dimensional CA used to implement pseudo-random techniques in built-in self-test for VLSI. Due to the increase in digital ASIC complexity, testing is becoming one of the major costs in VLSI production, and the highly complex electronics used in particle physics experiments demand higher reliability than in the past. General criteria are given to evaluate the feasibility of the circuit used for testing, and some quantitative parameters are underlined to optimize the architecture of the cellular automaton. Furthermore, we propose a two-dimensional CA that performs a peak-finding algorithm in a matrix of cells mapping a sub-region of a calorimeter. As in a two-dimensional filtering process, the peaks of the energy clusters are found in one evolution step. This CA belongs to Wolfram class II cellular automata. Some quantitative parameters are given to optimize the architecture of the cellular automaton implemented in a commercial field-programmable gate array (FPGA).
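    The pseudo-random pattern generation that the report attributes to a one-dimensional CA can be illustrated in software. The sketch below uses the elementary rule-30 update with wrap-around boundaries; rule 30 is a common choice for pseudo-random CA generators and is assumed here for illustration rather than taken from the report (BIST designs often use rule 90/150 hybrids instead).

```python
import numpy as np

def step_rule30(state):
    """One synchronous update of an elementary rule-30 CA (wrap-around)."""
    l, r = np.roll(state, 1), np.roll(state, -1)
    return l ^ (state | r)          # rule 30: left XOR (center OR right)

def prbs(width=16, steps=8, seed_cell=None):
    """Pseudo-random test patterns from a 1-D CA, in the spirit of a
    built-in self-test pattern generator (illustrative)."""
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2 if seed_cell is None else seed_cell] = 1
    rows = []
    for _ in range(steps):
        state = step_rule30(state)
        rows.append(state.copy())
    return np.array(rows)
```

Each row of the returned array is one test pattern; because the update is deterministic, the same seed cell always reproduces the same sequence, which is what makes CA-based pattern generators usable for signature-based testing.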

  17. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and a one-week conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  18. An angle-based subspace anomaly detection approach to high-dimensional data: With an application to industrial fault detection

    International Nuclear Information System (INIS)

    Zhang, Liangwei; Lin, Jing; Karim, Ramin

    2015-01-01

    The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting a meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain detection accuracy in high-dimensional circumstances. The suggested approach assesses, for one specific anomaly candidate, the angle between pairs of lines: the first line connects the relevant data point and the center of its adjacent points; the other is one of the axis-parallel lines. Those dimensions which have a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found that the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, providing a preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration confirms its efficacy in fault detection applications.
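    The angle-based subspace selection and normalized Mahalanobis scoring described above can be sketched roughly as follows. This is an illustrative reconstruction, not the exact ABSAD algorithm: the neighborhood size, the use of |cos| to rank axes, and the normalization by subspace dimension are assumptions.

```python
import numpy as np

def absad_score(X, i, k=10, n_dims=2):
    """Angle-based subspace score for candidate i: pick the axes most
    aligned with the line from x_i to its neighborhood centre, then
    score x_i with a Mahalanobis distance in that subspace."""
    x = X[i]
    dist = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(dist)[1:k + 1]          # k nearest neighbours (excl. self)
    v = X[nn].mean(0) - x                   # reference line to the centre
    # |cos(angle)| between v and each axis-parallel direction; a large
    # |cos| means a small angle with the reference line
    cos = np.abs(v) / (np.linalg.norm(v) + 1e-12)
    dims = np.argsort(-cos)[:n_dims]        # axes with the smallest angles
    sub = X[nn][:, dims]                    # neighbourhood in the subspace
    mu = sub.mean(0)
    cov = np.cov(sub, rowvar=False) + 1e-6 * np.eye(n_dims)
    diff = x[dims] - mu
    m2 = diff @ np.linalg.solve(cov, diff)
    return np.sqrt(m2 / n_dims)             # normalized by subspace dimension
```

A point that deviates strongly along a few axes gets those axes selected and receives a large score there, while inliers score near the usual Mahalanobis baseline.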

  19. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

    Two dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrapeoff zone shielding for sputtered impurities

  20. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Directory of Open Access Journals (Sweden)

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a high-performance six-dimensional force/torque sensor. Through the SDDO, the developed sensor simultaneously achieves high sensitivity, linearity, stiffness, and repeatability, which is hard to obtain with traditional force/torque sensors. The integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiment shows impressive characteristics; the developed force/torque sensor can therefore be usefully employed in industry, and the design methods can also be used to develop other industrial products.

  1. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found for two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of the four-dimensional space R 4 onto R 3, which establishes an equivalence between the Coulomb and harmonic potentials, is used to show that an exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author) [pt

  2. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  3. Multi-dimensional analysis of high resolution {gamma}-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). Future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy, and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performance of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  4. Two-dimensional numerical experiments with DRIX-2D on two-phase-water-flows referring to the HDR-blowdown-experiments

    International Nuclear Information System (INIS)

    Moesinger, H.

    1979-08-01

    The computer program DRIX-2D has been developed from SOLA-DF. The essential elements of the program structure are described. To verify DRIX-2D, an Edwards blowdown experiment is calculated, and other numerical results are compared with steady-state experiments and models. Numerical experiments are performed on the transient two-phase flow occurring in the broken pipe of a PWR in the case of a hypothetical LOCA. The essential results of the two-dimensional calculations are: 1. The appearance of a radial profile of void fraction, velocity, sound speed and mass flow rate inside the blowdown nozzle. The reason is the flow contraction at the nozzle inlet, which leads to more vapour production in the vicinity of the pipe wall. 2. A comparison between modelling in axisymmetric and Cartesian coordinates, and between calculations with and without the core barrel, shows the following: a) The three-dimensional flow pattern at the nozzle inlet is poorly described using Cartesian coordinates; in consequence, a considerable difference in the pressure history results. b) The core barrel alters the reflection behaviour of the pressure waves oscillating in the blowdown nozzle; therefore, the core barrel should be modelled as a wall normal to the nozzle axis. (orig./HP) [de

  5. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  6. An Eoetvoes versus a Galileo experiment: A study in two versus three-dimensional physics

    International Nuclear Information System (INIS)

    Hughes, R.J.; Nieto, M.M.; Goldman, T.

    1988-01-01

    We show how the net effect of two new, approximately cancelling (vector and scalar) gravitational forces could produce a measurable effect from a horizontal thin slab in an Eoetvoes experiment, yet yield a null result at the same level for a Galileo experiment. The resolution is an example of two- versus three-dimensional physics and of the cancelling nature of the two forces. Using two different earth models, we apply this result to the Australian mine gravity data of Stacey et al., the Brookhaven Eoetvoes experiment of Thieberger, and the Colorado Galileo experiment of Niebauer et al. (orig.)

  7. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  8. Two-dimensional cross-section sensitivity and uncertainty analysis of the LBM experience at LOTUS

    International Nuclear Information System (INIS)

    Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.

    1989-01-01

    In recent years, the LOTUS fusion blanket facility at IGA-EPF in Lausanne provided a series of irradiation experiments with the Lithium Blanket Module (LBM). The LBM has a realistic fusion-blanket configuration and realistic materials. It is approximately an 80-cm cube, and the breeding material is Li₂O. Using as the D-T neutron source the Haefely Neutron Generator (HNG), with an intensity of about 5·10¹² n/s, a series of experiments was carried out with the bare LBM as well as with the LBM preceded by Pb, Be and ThO₂ multipliers. In a recent joint Los Alamos/PSI effort, a sensitivity and nuclear-data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite-difference Sₙ-transport code ONEDANT, the two-dimensional finite-element Sₙ-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear-data uncertainty code SENSIBL. For the nucleonic transport calculations, three 187-neutron-group libraries are presently available: MATXS8A and MATXS8F, based on ENDF/B-V evaluations, and MAT187, based on JEF/EFF evaluations. COVFILS-2, a 74-group library of neutron cross-sections, scattering matrices and covariances, is the data source for SENSIBL; the 74-group structure of COVFILS-2 is a subset of the Los Alamos 187-group structure. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations was performed for the bare LBM as well as for the Pb- and Be-preceded LBM, using the MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis was performed for all cases
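
Sensitivity and uncertainty codes of this kind typically rest on the first-order "sandwich rule": the relative variance of a response is S·C·S, where S holds the relative sensitivities to each group cross-section and C is their relative covariance matrix. A minimal illustration with invented numbers (not real cross-section covariances) is:

```python
import numpy as np

# Illustrative sandwich rule for nuclear-data uncertainty propagation.
# S: relative sensitivity of a response R to three group cross-sections;
# C: their relative covariance matrix. All numbers are made up.
S = np.array([0.8, -0.3, 0.1])           # sensitivity coefficients
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.01]])       # relative covariance matrix

rel_var = S @ C @ S                       # relative variance of R
rel_std = np.sqrt(rel_var)                # relative standard deviation
```

The adjoint transport solutions mentioned in the abstract supply exactly the sensitivity profile S without re-running the forward calculation for each perturbed cross-section.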

  9. Five and four dimensional experiments for robust backbone resonance assignment of large intrinsically disordered proteins: application to Tau3x protein

    International Nuclear Information System (INIS)

    Żerko, Szymon; Byrski, Piotr; Włodarczyk-Pruszyński, Paweł; Górka, Michał; Ledolter, Karin; Masliah, Eliezer; Konrat, Robert; Koźmiński, Wiktor

    2016-01-01

    New experiments dedicated to backbone resonance assignment of large IDPs are presented. The most distinctive feature of all the described techniques is the employment of MOCCA-XY16 mixing sequences to obtain effective magnetization transfers between backbone carbonyl carbon nuclei. The proposed 4- and 5-dimensional experiments provide high dispersion of the obtained signals, making them suitable for large IDPs (an application to the 354 amino-acid residues of the Tau protein 3x isoform is presented), and provide both forward and backward connectivities. Moreover, connecting short chains interrupted by proline residues is also possible. All the experiments employ non-uniform sampling.

  10. Five and four dimensional experiments for robust backbone resonance assignment of large intrinsically disordered proteins: application to Tau3x protein

    Energy Technology Data Exchange (ETDEWEB)

    Żerko, Szymon; Byrski, Piotr; Włodarczyk-Pruszyński, Paweł; Górka, Michał [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland); Ledolter, Karin [University of Vienna, Department of Computational and Structural Biology, Max F. Perutz Laboratories (Austria); Masliah, Eliezer [University of California, San Diego, Departments of Neuroscience and Pathology (United States); Konrat, Robert [University of Vienna, Department of Computational and Structural Biology, Max F. Perutz Laboratories (Austria); Koźmiński, Wiktor, E-mail: kozmin@chem.uw.edu.pl [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland)

    2016-08-15

    New experiments dedicated to backbone resonance assignment of large IDPs are presented. The most distinctive feature of all the described techniques is the employment of MOCCA-XY16 mixing sequences to obtain effective magnetization transfers between backbone carbonyl carbon nuclei. The proposed 4- and 5-dimensional experiments provide high dispersion of the obtained signals, making them suitable for large IDPs (an application to the 354 amino-acid residues of the Tau protein 3x isoform is presented), and provide both forward and backward connectivities. Moreover, connecting short chains interrupted by proline residues is also possible. All the experiments employ non-uniform sampling.

  11. Evaluation of one-dimensional and two-dimensional volatility basis sets in simulating the aging of secondary organic aerosol with smog-chamber experiments.

    Science.gov (United States)

    Zhao, Bin; Wang, Shuxiao; Donahue, Neil M; Chuang, Wayne; Hildebrandt Ruiz, Lea; Ng, Nga L; Wang, Yangjun; Hao, Jiming

    2015-02-17

    We evaluate the one-dimensional volatility basis set (1D-VBS) and two-dimensional volatility basis set (2D-VBS) in simulating the aging of SOA derived from toluene and α-pinene against smog-chamber experiments. If we simulate the first-generation products with empirical chamber fits and the subsequent aging chemistry with a 1D-VBS or a 2D-VBS, the models mostly overestimate the SOA concentrations in the toluene oxidation experiments. This is because the empirical chamber fits include both first-generation oxidation and aging; simulating aging in addition to this results in double counting of the initial aging effects. If the first-generation oxidation is treated explicitly, the base-case 2D-VBS underestimates the SOA concentrations and O:C increase of the toluene oxidation experiments; it generally underestimates the SOA concentrations and overestimates the O:C increase of the α-pinene experiments. With the first-generation oxidation treated explicitly, we could modify the 2D-VBS configuration individually for toluene and α-pinene to achieve good model-measurement agreement. However, we are unable to simulate the oxidation of both toluene and α-pinene with the same 2D-VBS configuration. We suggest that future models should implement parallel layers for anthropogenic (aromatic) and biogenic precursors, and that more modeling studies and laboratory research be done to optimize the "best-guess" parameters for each layer.
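
A 1D-VBS distributes organic mass over logarithmically spaced saturation-concentration bins and computes gas-particle partitioning from them. The sketch below uses made-up bin masses and the standard absorptive-partitioning closure solved by fixed-point iteration; it illustrates the framework only, not the authors' model configuration:

```python
import numpy as np

# Minimal 1D volatility basis set partitioning sketch.  Mass in each bin i
# condenses with fraction xi_i = 1 / (1 + C*_i / C_OA), where C_OA is the
# total organic aerosol mass, found self-consistently by iteration.
C_star = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # saturation conc. (ug/m3)
C_total = np.array([0.5, 0.8, 1.2, 2.0, 3.5, 5.0])        # total mass per bin (ug/m3)

def partition(C_star, C_total, tol=1e-9):
    C_OA = C_total.sum() / 2.0            # initial guess
    xi = np.zeros_like(C_star)
    for _ in range(200):
        xi = 1.0 / (1.0 + C_star / C_OA)  # particle-phase fraction per bin
        new = (C_total * xi).sum()
        if abs(new - C_OA) < tol:
            break
        C_OA = new
    return C_OA, xi

C_OA, xi = partition(C_star, C_total)
```

Aging chemistry in a VBS then amounts to moving mass between bins (down in volatility, and in the 2D-VBS also up in O:C) at each time step before re-partitioning.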

  12. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    Energy Technology Data Exchange (ETDEWEB)

    Aly, A. [North Carolina State Univ., Raleigh, NC (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States); Ivanov, Kostadin [Pennsylvania State Univ., University Park, PA (United States); Motta, Arthur [Pennsylvania State Univ., University Park, PA (United States); Lacroix, E. [Pennsylvania State Univ., University Park, PA (United States); Manera, Annalisa [Univ. of Michigan, Ann Arbor, MI (United States); Walter, D. [Univ. of Michigan, Ann Arbor, MI (United States); Williamson, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gamble, K. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-10-29

    To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes, with a sub-channel code as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of the hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  13. Position sensitive detection coupled to high-resolution time-of-flight mass spectrometry: Imaging for molecular beam deflection experiments

    International Nuclear Information System (INIS)

    Abd El Rahim, M.; Antoine, R.; Arnaud, L.; Barbaire, M.; Broyer, M.; Clavier, Ch.; Compagnon, I.; Dugourd, Ph.; Maurelli, J.; Rayane, D.

    2004-01-01

    We have developed and tested a high-resolution time-of-flight mass spectrometer coupled to a position sensitive detector for molecular beam deflection experiments. The major achievement of this new spectrometer is to provide three-dimensional imaging (X and Y positions and time-of-flight) of the ion packet on the detector, with a high acquisition rate and high resolution in both mass and position. The calibration of the experimental setup and its application to molecular beam deflection experiments are discussed

  14. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  15. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of the JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create representations of the plasma parameters on low-dimensional maps which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with high risk of disruption and those with low risk of disruption. (paper)
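
A self-organizing map projects high-dimensional samples onto a small 2-D grid of prototype vectors. The compact NumPy sketch below uses random stand-in data, not the JET signals or the paper's actual analysis pipeline; grid size, learning-rate schedule, and neighbourhood width are illustrative choices:

```python
import numpy as np

# Minimal self-organizing map: each sample pulls its best-matching unit (BMU)
# and the BMU's grid neighbours toward it, with shrinking rate and radius.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))             # stand-in for plasma-parameter vectors

grid = 6                                  # 6 x 6 map
W = rng.normal(size=(grid, grid, X.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

epochs = 20
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)       # decaying learning rate
    sigma = max(grid / 2 * (1 - epoch / epochs), 0.5)
    for x in X:
        d = np.linalg.norm(W - x, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        h = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma**2))
        W += lr * h[..., None] * (x - W)  # neighbourhood-weighted update

# Each sample is now summarized by its BMU cell on the 6 x 6 map.
bmus = [np.unravel_index(np.linalg.norm(W - x, axis=2).argmin(), (grid, grid))
        for x in X]
```

After training, colouring each cell by the fraction of disruptive discharges mapped to it would give exactly the kind of risk map the abstract describes.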

  16. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    Science.gov (United States)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
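
The truncation idea, representing a latitudinal profile by its projection onto low-order Legendre polynomials in mu = sin(latitude), can be sketched as follows. The profile is a toy example, not the model's actual fields:

```python
import numpy as np

# Project a latitudinal profile onto P0 and P2 only, the severe truncation
# used by a "1-1/2-D" model.  The toy field is an even quadratic in mu,
# so the two-term expansion reproduces it essentially exactly.
mu = np.linspace(-1, 1, 201)
field = 1.0 - 0.6 * mu**2                 # toy ozone-like latitudinal profile

P0 = np.ones_like(mu)
P2 = 0.5 * (3 * mu**2 - 1)

def integrate(y, x):
    """Trapezoidal rule on a uniform grid."""
    return float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())

# Projection coefficients: c_n = (2n + 1)/2 * integral of f(mu) P_n(mu) dmu
c0 = 0.5 * integrate(field * P0, mu)
c2 = 2.5 * integrate(field * P2, mu)

recon = c0 * P0 + c2 * P2                 # truncated representation
```

For this profile the exact coefficients are c0 = 0.8 and c2 = -0.4; fields with finer latitudinal structure would of course lose their higher-order components under the same truncation.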

  17. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  18. Hall MHD Modeling of Two-dimensional Reconnection: Application to MRX Experiment

    International Nuclear Information System (INIS)

    Lukin, V.S.; Jardin, S.C.

    2003-01-01

    A two-dimensional resistive Hall magnetohydrodynamics (MHD) code is used to investigate the dynamical evolution of driven reconnection in the Magnetic Reconnection Experiment (MRX). The initial conditions and dimensionless parameters of the simulation are set to be similar to the experimental values. We successfully reproduce many features of the time evolution of magnetic configurations for both co- and counter-helicity reconnection in MRX. The Hall effect is shown to be important during the early dynamic X-phase of MRX reconnection, while it is effectively negligible during the late ''steady-state'' Y-phase, when plasma heating takes place. Based on simple symmetry considerations, an experiment to directly measure the Hall effect in the MRX configuration is proposed, and numerical evidence for the expected outcome is given

  19. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling-profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of the cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding the biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  20. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
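
The adaptive LASSO reweights the L1 penalty by a first-stage estimate so that truly zero coefficients are penalized more heavily, which is what yields the oracle property. A self-contained toy sketch follows; it uses cross-sectional data rather than the paper's time-series setting, and the data, tuning parameter, and solver are all invented for illustration:

```python
import numpy as np

# Adaptive LASSO via column rescaling: with weights w_j = 1/|b_init_j|,
# solving a plain LASSO on X / w and dividing the solution by w gives the
# weighted-penalty estimator.  The LASSO itself is solved by coordinate descent.
rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

def lasso_cd(X, y, lam, iters=500):
    """Coordinate descent for 0.5*||y - Xb||^2 + lam*||b||_1."""
    b = np.zeros(X.shape[1])
    col_sq = (X**2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

b_init = np.linalg.lstsq(X, y, rcond=None)[0]       # first-stage estimate
w = 1.0 / np.maximum(np.abs(b_init), 1e-8)          # adaptive weights
b_ada = lasso_cd(X / w, y, lam=5.0) / w             # back-transform
```

With accurate first-stage estimates the zero coefficients receive enormous penalties and are set exactly to zero, while the signal coefficients are nearly unbiased.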

  1. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, as well as building images out of those interactions is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  2. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensions occupies a large proportion of the similarity, leading to dissimilarity between any pair of results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude greater than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
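
The interval-mapping idea can be sketched directly. The function below is an assumed reading of the abstract: the interval count, the same-or-adjacent adjacency rule, and the per-dimension normalization are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

# Interval-based similarity sketch: each dimension's range [lo, hi] is split
# into k intervals; only components falling in the same or an adjacent
# interval contribute, normalized so the result lies in [0, 1].
def lattice_similarity(a, b, lo, hi, k=10):
    bins_a = np.clip(((a - lo) / (hi - lo) * k).astype(int), 0, k - 1)
    bins_b = np.clip(((b - lo) / (hi - lo) * k).astype(int), 0, k - 1)
    close = np.abs(bins_a - bins_b) <= 1            # same or adjacent interval
    if not close.any():
        return 0.0
    # per-dimension similarity on the contributing components only
    sim = 1.0 - np.abs(a[close] - b[close]) / (hi[close] - lo[close])
    return float(sim.mean())

lo, hi = np.zeros(5), np.ones(5)
a = np.array([0.10, 0.50, 0.90, 0.30, 0.70])
b = np.array([0.12, 0.55, 0.10, 0.32, 0.68])
s = lattice_similarity(a, b, lo, hi)
```

The third dimension (0.90 vs 0.10) lands in far-apart intervals and is excluded, so the sparse/noisy-dimension contribution the abstract warns about never enters the score.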

  3. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions, discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples, including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  4. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions, discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples, including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
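
The alternating least-squares construction can be illustrated on a toy two-variable model, where a separated representation u(y1, y2) ≈ Σ f_k(y1) g_k(y2) reduces to a low-rank factorization of a sampled grid. The details below (model function, rank, iteration count) are assumptions for illustration, not the paper's regularized regression:

```python
import numpy as np

# Alternating least squares for a rank-r separated approximation of a grid
# of model evaluations U: fix G and solve for F in least squares, then swap.
rng = np.random.default_rng(3)
n = 40
y1 = np.linspace(0, 1, n)
y2 = np.linspace(0, 1, n)
U = np.exp(-np.outer(y1, y2))            # smooth, nearly separable model output

r = 3
F = rng.normal(size=(n, r))
G = rng.normal(size=(n, r))
for _ in range(50):
    F = U @ G @ np.linalg.pinv(G.T @ G)  # least-squares update of F, G fixed
    G = U.T @ F @ np.linalg.pinv(F.T @ F)  # least-squares update of G, F fixed

err = np.linalg.norm(U - F @ G.T) / np.linalg.norm(U)
```

Because the kernel is analytic, its singular values decay rapidly and a rank-3 separated representation already matches the grid to high accuracy; in the high-dimensional setting each factor depends on a single random input and the same alternation is carried out factor by factor.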

  5. Designs for highly nonlinear ablative Rayleigh-Taylor experiments on the National Ignition Facility

    International Nuclear Information System (INIS)

    Casner, A.; Masse, L.; Liberatore, S.; Jacquet, L.; Loiseau, P.; Poujade, O.; Smalyuk, V. A.; Bradley, D. K.; Park, H. S.; Remington, B. A.; Igumenshchev, I.; Chicanne, C.

    2012-01-01

    We present two designs relevant to ablative Rayleigh-Taylor instability in transition from weakly nonlinear to highly nonlinear regimes at the National Ignition Facility [E. I. Moses, J. Phys.: Conf. Ser. 112, 012003 (2008)]. The sensitivity of nonlinear Rayleigh-Taylor instability physics to ablation velocity is addressed with targets driven by indirect drive, with stronger ablative stabilization, and by direct drive, with weaker ablative stabilization. The indirect drive design demonstrates the potential to reach a two-dimensional bubble-merger regime with a 20 ns duration drive at moderate radiation temperature. The direct drive design achieves a 3 to 5 times increased acceleration distance for the sample in comparison to previous experiments allowing at least 2 more bubble generations when starting from a three-dimensional broadband spectrum.

  6. Designs for highly nonlinear ablative Rayleigh-Taylor experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Casner, A.; Masse, L.; Liberatore, S.; Jacquet, L.; Loiseau, P.; Poujade, O. [CEA, DAM, DIF, F-91297 Arpajon (France); Smalyuk, V. A.; Bradley, D. K.; Park, H. S.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Igumenshchev, I. [Laboratory of Laser Energetics, University of Rochester, Rochester, New York 14623-1299 (United States); Chicanne, C. [CEA, DAM, VALDUC, F-21120 Is-sur-Tille (France)

    2012-08-15

    We present two designs relevant to ablative Rayleigh-Taylor instability in transition from weakly nonlinear to highly nonlinear regimes at the National Ignition Facility [E. I. Moses, J. Phys.: Conf. Ser. 112, 012003 (2008)]. The sensitivity of nonlinear Rayleigh-Taylor instability physics to ablation velocity is addressed with targets driven by indirect drive, with stronger ablative stabilization, and by direct drive, with weaker ablative stabilization. The indirect drive design demonstrates the potential to reach a two-dimensional bubble-merger regime with a 20 ns duration drive at moderate radiation temperature. The direct drive design achieves a 3 to 5 times increased acceleration distance for the sample in comparison to previous experiments allowing at least 2 more bubble generations when starting from a three-dimensional broadband spectrum.

  7. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that, using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
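
A first-order cut-HDMR expansion approximates a multivariate response by a constant plus univariate component functions evaluated along lines through a reference point, which is why the number of model calls grows only polynomially. A toy sketch, with an invented model function rather than the paper's fuzzy finite element analysis:

```python
import numpy as np

# First-order cut-HDMR: f(x) ~ f(x0) + sum_i [ f(x0 with x_i swapped in) - f(x0) ].
# Only the pairwise (and higher) interaction terms are neglected.
def f(x):
    """Toy 3-parameter model with a weak pairwise interaction."""
    return x[0] ** 2 + 2 * x[1] + 0.5 * x[2] + 0.1 * x[0] * x[1]

x0 = np.array([0.5, 0.5, 0.5])           # reference ("cut") point
f0 = f(x0)

def hdmr1(x):
    total = f0
    for i in range(len(x0)):
        xi = x0.copy()
        xi[i] = x[i]                     # vary one input at a time
        total += f(xi) - f0              # univariate component f_i(x_i)
    return total

x = np.array([0.8, 0.2, 0.9])
exact = f(x)
approx = hdmr1(x)
```

Here the only error is the neglected interaction residual 0.1·(x1 − 0.5)(x2 − 0.5) = −0.009; in the fuzzy setting the same expansion is evaluated at the interval endpoints of each α-cut instead of at a single point.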

  8. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  9. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, such as Laguerre-Gauss modes carrying orbital angular momentum, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to broad, complex quantum networks in which high-dimensionally entangled states could be distributed over distant photonic chips.

  10. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods estimate the covariance matrix from strict factor models, which assume independent idiosyncratic components; this assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow cross-sectional correlation even after the common factors are taken out, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
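
    The general recipe — remove the common factors, then threshold the residual covariance — can be sketched on simulated data. The universal (constant) threshold below stands in for the entry-adaptive thresholds of Cai and Liu (2011), and all sizes and parameters are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 30

# One-factor model: y_t = b f_t + u_t with (here diagonal) error covariance.
b = rng.normal(size=p)
f = rng.normal(size=n)
u = rng.normal(size=(n, p)) * 0.5
Y = np.outer(f, b) + u

# Take out the common factor via the leading principal component.
S = np.cov(Y, rowvar=False)
w, V = np.linalg.eigh(S)                       # ascending eigenvalues
lead = w[-1] * np.outer(V[:, -1], V[:, -1])
R = S - lead                                   # idiosyncratic covariance estimate

# Threshold off-diagonal entries at a universal level ~ sqrt(log p / n).
tau = 2.0 * np.sqrt(np.log(p) / n)
R_thr = np.where(np.abs(R) >= tau, R, 0.0)
np.fill_diagonal(R_thr, np.diag(R))            # never threshold the diagonal

off_diag_kept = np.count_nonzero(R_thr) - p
print(off_diag_kept, "of", p * (p - 1), "off-diagonal entries survive")
```

    Because the idiosyncratic components are never observed directly, the thresholding is applied to the residual of the factor fit, exactly the complication the abstract highlights.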

  11. Experiment of flow regime map and local condensing heat transfer coefficients inside three dimensional inner microfin tubes

    Science.gov (United States)

    Du, Yang; Xin, Ming Dao

    1999-03-01

    This paper develops a new type of three-dimensional inner microfin tube. The experimental results for the flow patterns of horizontal condensation inside these tubes are reported. The flow patterns for horizontal condensation inside the newly made tubes are divided into annular flow, stratified flow, and intermittent flow within the test conditions. Experiments on the local heat transfer coefficients for the different flow patterns have been systematically carried out, as have experiments on how the local heat transfer coefficients change with the vapor dryness fraction. Compared with the heat transfer coefficients of two-dimensional inner microfin tubes, those of the three-dimensional inner microfin tubes increase by 47-127% in the annular flow region, 38-183% in stratified flow, and 15-75% in intermittent flow, respectively. The enhancement factor of the local heat transfer coefficients ranges from 1.8 to 6.9 for vapor dryness fractions from 0.05 to 1.

  12. Numerical experiment on different validation cases of water coolant flow in supercritical pressure test sections assisted by discriminated dimensional analysis part I: the dimensional analysis

    International Nuclear Information System (INIS)

    Kiss, A.; Aszodi, A.

    2011-01-01

    As recent studies prove, in contrast to 'classical' dimensional analysis, whose application is widely described in heat transfer textbooks despite its poor results, the less well known discriminated dimensional analysis approach can provide a deeper insight into the physical problems involved and much better results in all cases where it is applied. As a first step of this ongoing research, discriminated dimensional analysis has been performed on supercritical pressure water pipe flow heated through the pipe solid wall, in order to identify the independent dimensionless groups (those that play an independent role in the above-mentioned thermal hydraulic phenomena) and thereby serve as a theoretical basis for comparison between well-known supercritical pressure water pipe heat transfer experiments and the results of their validated CFD simulations. (author)

  13. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
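
    The ontology-based feature expansion step described above can be illustrated with a toy graph. The term names and parent links below are hypothetical stand-ins for MeSH headings, and UDT-RF's subsequent feature selection is not shown; this is only a sketch of the expansion idea.

```python
# Hypothetical miniature ontology: each term maps to its parent terms.
parents = {
    "neoplasm_metastasis": ["neoplastic_process"],
    "neoplastic_process": ["disease"],
    "apoptosis": ["cell_death"],
    "cell_death": ["biological_process"],
}

def expand(features):
    """Augment a document's feature set with all of its ontology ancestors."""
    expanded = set(features)
    stack = list(features)
    while stack:
        term = stack.pop()
        for parent in parents.get(term, []):
            if parent not in expanded:
                expanded.add(parent)
                stack.append(parent)
    return expanded

doc = {"neoplasm_metastasis", "apoptosis"}
print(sorted(expand(doc)))
# → ['apoptosis', 'biological_process', 'cell_death', 'disease',
#    'neoplasm_metastasis', 'neoplastic_process']
```

    Walking the graph to the roots makes documents that mention only specific terms comparable through their shared general ancestors, which is what enlarges (and structures) the feature space.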

  14. NMR experiments on a three-dimensional vibrofluidized granular medium

    Science.gov (United States)

    Huan, Chao; Yang, Xiaoyu; Candela, D.; Mair, R. W.; Walsworth, R. L.

    2004-04-01

    A three-dimensional granular system fluidized by vertical container vibrations was studied using pulsed field gradient NMR coupled with one-dimensional magnetic resonance imaging. The system consisted of mustard seeds vibrated vertically at 50 Hz, and the number of layers Nl⩽4 was sufficiently low to achieve a nearly time-independent granular fluid. Using NMR, the vertical profiles of density and granular temperature were directly measured, along with the distributions of vertical and horizontal grain velocities. The velocity distributions showed modest deviations from Maxwell-Boltzmann statistics, except for the vertical velocity distribution near the sample bottom, which was highly skewed and non-Gaussian. Data taken for three values of Nl and two dimensionless accelerations Γ=15,18 were fitted to a hydrodynamic theory, which successfully models the density and temperature profiles away from the vibrating container bottom. A temperature inversion near the free upper surface is observed, in agreement with predictions based on the hydrodynamic parameter μ which is nonzero only in inelastic systems.

  15. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  16. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in the electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified, highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. They were then covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application to the simultaneous detection of heavy metal ions was also investigated.

  17. Phonons in a one-dimensional Yukawa chain: Dusty plasma experiment and model

    International Nuclear Information System (INIS)

    Liu Bin; Goree, J.

    2005-01-01

    Phonons in a one-dimensional chain of charged microspheres suspended in a plasma were studied in an experiment. The phonons correspond to random particle motion in the chain; no external manipulation was applied to excite the phonons. Two modes were observed, longitudinal and transverse. The velocity fluctuations in the experiment are analyzed using current autocorrelation functions and a phonon spectrum. The phonon energy was found to be unequally partitioned among phonon modes in the dusty plasma experiment. The experimental phonon spectrum was characterized by a dispersion relation that was found to differ from the dispersion relation for externally excited phonons. This difference is attributed to the presence of frictional damping due to gas, which affects the propagation of externally excited phonons differently from phonons that correspond to random particle motion. A model is developed and fit to the experiment to explain the features of the autocorrelation function, phonon spectrum, and the dispersion relation

  18. One dimensional two-body collisions experiment based on LabVIEW interface with Arduino

    Science.gov (United States)

    Saphet, Parinya; Tong-on, Anusorn; Thepnurat, Meechai

    2017-09-01

    The purpose of this work is to build a physics lab apparatus that is modern, low-cost, and simple. In the one-dimensional two-body collisions experiment, we used the Arduino UNO R3 as a data acquisition system controlled by a LabVIEW program. The photogate sensors were designed using an LED and an LDR to measure position as a function of time. An aluminium houseware frame and a blower were used for the air track system. In both the totally inelastic and the elastic collision experiments, the measured momentum and energy conservation results are in good agreement with the theoretical calculations.
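
    The theoretical calculations the experiment is checked against are the standard one-dimensional collision formulas. A short sketch with illustrative masses and velocities (not the authors' measured data):

```python
def elastic_collision(m1, v1, m2, v2):
    """Final velocities for a 1-D perfectly elastic two-body collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def inelastic_collision(m1, v1, m2, v2):
    """Common final velocity for a totally inelastic collision."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

m1, v1, m2, v2 = 0.2, 0.5, 0.3, -0.1   # kg, m/s (illustrative values)
u1, u2 = elastic_collision(m1, v1, m2, v2)

# Momentum is conserved in both cases; kinetic energy only in the elastic one.
p_before = m1 * v1 + m2 * v2
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
assert abs(m1 * u1 + m2 * u2 - p_before) < 1e-12
assert abs(0.5 * m1 * u1**2 + 0.5 * m2 * u2**2 - ke_before) < 1e-12
print(u1, u2, inelastic_collision(m1, v1, m2, v2))
```

    In the apparatus, the velocities entering these formulas come from the photogate timings (gate spacing divided by transit time).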

  19. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  20. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that ''supercomputing on a budget'' is possible when subtasks of large problems are run on the hardware most suited to them. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities and then assigned to the hardware best suited to the particular task.
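
    A serial global-best PSO baseline is easy to sketch; in the heterogeneous setting described above, it is the cost-function evaluations (the calls to `f` below) that would be farmed out to multi-core or GPU hardware. The inertia and acceleration coefficients are conventional textbook choices, not values from the paper.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO minimizing f over the box [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()       # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)      # the parallelizable part
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, val = pso(lambda z: float(np.sum(z * z)), dim=10)
print(val)  # close to 0 for the sphere function
```

    Because each particle's evaluation is independent within an iteration, the `vals` line is an embarrassingly parallel subtask, which is why expensive cost functions are the natural candidates for offloading.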

  1. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to incredible growth of high dimensional dataset, conventional data base querying methods are inadequate to extract useful information, so researchers nowadays ... Recently cluster analysis is a popularly used data analysis method in number of areas.

  2. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
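
    The eigenvalue bias the authors characterize can be seen in a short simulation: even for isotropic data, where every population eigenvalue equals 1, the top sample eigenvalue concentrates near the Marchenko-Pastur bulk edge (1 + √(p/n))², well above its true value. This illustrates the bias only; it is not the S-POET estimator, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 80                      # dimensionality comparable to sample size

# Isotropic Gaussian data: all population eigenvalues are exactly 1.
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)
top = np.linalg.eigvalsh(S)[-1]     # largest sample eigenvalue

# Marchenko-Pastur predicts the top sample eigenvalue near (1 + sqrt(p/n))^2.
edge = (1 + np.sqrt(p / n)) ** 2
print(top, edge)                    # both far above the true value 1
```

    When p/n is not small, this upward bias of leading sample eigenvalues (and the accompanying eigenvector inconsistency) is exactly what shrinkage corrections such as S-POET are designed to remove.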

  3. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than either alone in terms of mean squared error, when a bias-based analysis is used.

  4. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  5. Computational Search for Two-Dimensional MX2 Semiconductors with Possible High Electron Mobility at Room Temperature

    Directory of Open Access Journals (Sweden)

    Zhishuo Huang

    2016-08-01

    Neither of the two typical two-dimensional materials, graphene and single-layer MoS₂, is good enough for developing semiconductor logic devices. We calculated the electron mobility of 14 two-dimensional semiconductors with composition MX₂, where M (= Mo, W, Sn, Hf, Zr, or Pt) is a transition metal and X is S, Se, or Te. We approximated the electron-phonon scattering matrix by deformation potentials, within which long-wave longitudinal acoustical and optical phonon scatterings were included. Piezoelectric scattering in the compounds without inversion symmetry is also taken into account. We found that, of the 14 compounds, WS₂, PtS₂, and PtSe₂ are promising for logic devices on account of their possible high electron mobility and finite band gap. In particular, the phonon-limited electron mobility in PtSe₂ reaches about 4000 cm²·V⁻¹·s⁻¹ at room temperature, the highest among the compounds with an indirect band gap of about 1.25 eV under the local density approximation. Our results can serve as a first guide for experiments to synthesize better two-dimensional materials for future semiconductor devices.

  6. ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES

    Directory of Open Access Journals (Sweden)

    Nikola Stefanović

    2007-06-01

    In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles, used by many world leaders, are the autocratic and democratic styles. These styles lie on two opposite sides of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic and the democratic leadership style, leadership theory researchers use two-dimensional matrices. The two-dimensional matrices define leadership styles on the basis of different parameters. By using these parameters, one can identify two-dimensional styles.

  7. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in the electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified, highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. They were then covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application to the simultaneous detection of heavy metal ions was also investigated.

  8. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  9. Three-dimensional turbulent swirling flow in a cylinder: Experiments and computations

    International Nuclear Information System (INIS)

    Gupta, Amit; Kumar, Ranganathan

    2007-01-01

    Dynamics of the three-dimensional flow in a cyclone with tangential inlet and tangential exit were studied using particle tracking velocimetry (PTV) and a three-dimensional computational model. The PTV technique is described in this paper and appears to be well suited for the current flow situation. The flow was helical in nature and a secondary recirculating flow was observed and well predicted by computations using the RNG k-ε turbulence model. The secondary flow was characterized by a single vortex which circulated around the axis and occupied a large fraction of the cylinder diameter. The locus of the vortex center meandered around the cylinder axis, making one complete revolution for a cylinder aspect ratio of 2. Tangential velocities from both experiments and computations were compared and found to be in good agreement. The general structure of the flow does not vary significantly as the Reynolds number is increased. However, slight changes in all components of velocity and pressure were seen as the inlet velocity is increased. By increasing the inlet aspect ratio it was observed that the vortex meandering changed significantly

  10. Three-dimensional turbulent swirling flow in a cylinder: Experiments and computations

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Amit [Department of Mechanical, Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816 (United States); Kumar, Ranganathan [Department of Mechanical, Materials and Aerospace Engineering, University of Central Florida, Orlando, FL 32816 (United States)]. E-mail: rnkumar@mail.ucf.edu

    2007-04-15

    Dynamics of the three-dimensional flow in a cyclone with tangential inlet and tangential exit were studied using particle tracking velocimetry (PTV) and a three-dimensional computational model. The PTV technique is described in this paper and appears to be well suited for the current flow situation. The flow was helical in nature and a secondary recirculating flow was observed and well predicted by computations using the RNG k-ε turbulence model. The secondary flow was characterized by a single vortex which circulated around the axis and occupied a large fraction of the cylinder diameter. The locus of the vortex center meandered around the cylinder axis, making one complete revolution for a cylinder aspect ratio of 2. Tangential velocities from both experiments and computations were compared and found to be in good agreement. The general structure of the flow does not vary significantly as the Reynolds number is increased. However, slight changes in all components of velocity and pressure were seen as the inlet velocity is increased. By increasing the inlet aspect ratio it was observed that the vortex meandering changed significantly.

  11. Three-dimensional modelling of an injection experiment in the anaerobic part of a landfill plume

    DEFF Research Database (Denmark)

    Juul Petersen, Michael; Engesgaard, Peter Knudegaard; Bjerg, Poul Løgstrup

    1998-01-01

    Analytical and numerical three-dimensional (3-D) simulations have been conducted and compared to data obtained from a large-scale (50 m), natural gradient field injection experiment. Eighteen different xenobiotic compounds (i.e. benzene, toluene, o-xylene, naphthalene, 1,1,1-TCA, PCE, and TCE...

  12. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  13. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  14. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for the detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples are utilized to demonstrate the efficacy of the method.
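
    The core of polynomial annihilation is that a difference operator annihilating low-degree polynomials returns values of order hᵐ on smooth regions but of order 1 across a jump. In the simplest order-1, one-dimensional case on a uniform grid, the operator reduces to a first difference. The toy example below (with an assumed test function and jump location) shows that reduction only; the paper's method uses higher orders on adaptive sparse grids in many dimensions.

```python
import numpy as np

# Piecewise-smooth test function with a jump of size 2 near x = 0.4.
xs = np.linspace(0, 1, 201)
f = np.where(xs < 0.4, np.sin(2 * np.pi * xs), np.sin(2 * np.pi * xs) + 2.0)

# Order-1 polynomial annihilation on a uniform grid is the first difference:
# it annihilates constants, so smooth regions give O(h) values (~2*pi*h here)
# while the jump gives an O(1) value (~2).
jumps = np.diff(f)
i = int(np.argmax(np.abs(jumps)))
print(xs[i], xs[i + 1])  # this cell brackets the discontinuity near 0.4
```

    The adaptive sparse-grid machinery then refines only around cells flagged this way, which is what keeps the total number of function evaluations linear in the dimension when the discontinuity involves few coordinates.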

  15. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables.An approximating Markov chain is built using this sampling and

  16. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  17. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)

    2016-07-27

A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  18. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  19. Preface [HD3-2015: International meeting on high-dimensional data-driven science

    International Nuclear Information System (INIS)

    2016-01-01

A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  20. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Directory of Open Access Journals (Sweden)

    Cemal Cagatay Bilgin

Full Text Available BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  1. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
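
Down-sizing, one of the strategies the abstract above finds effective, simply under-samples the majority class until the classes are balanced before training. A minimal sketch; the 100/10 split, feature count, and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical imbalanced training set: 100 majority and 10 minority samples
X = rng.normal(size=(110, 5))
y = np.array([0] * 100 + [1] * 10)

def downsize(X, y, rng):
    """Randomly under-sample every class down to the minority-class size."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X_bal, y_bal = downsize(X, y, rng)
```

Asymmetric bagging, also discussed above, repeats this draw many times and aggregates the resulting classifiers, recovering some of the information a single down-sized sample discards.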

  2. Lithium decoration of three dimensional boron-doped graphene frameworks for high-capacity hydrogen storage

    International Nuclear Information System (INIS)

    Wang, Yunhui; Meng, Zhaoshun; Liu, Yuzhen; You, Dongsen; Wu, Kai; Lv, Jinchao; Wang, Xuezheng; Deng, Kaiming; Lu, Ruifeng; Rao, Dewei

    2015-01-01

    Based on density functional theory and the first principles molecular dynamics simulations, a three-dimensional B-doped graphene-interconnected framework has been constructed that shows good thermal stability even after metal loading. The average binding energy of adsorbed Li atoms on the proposed material (2.64 eV) is considerably larger than the cohesive energy per atom of bulk Li metal (1.60 eV). This value is ideal for atomically dispersed Li doping in experiments. From grand canonical Monte Carlo simulations, high hydrogen storage capacities of 5.9 wt% and 52.6 g/L in the Li-decorated material are attained at 298 K and 100 bars

  3. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  4. Experiment and simulation on one-dimensional plasma photonic crystals

    International Nuclear Information System (INIS)

    Zhang, Lin; Ouyang, Ji-Ting

    2014-01-01

The transmission characteristics of microwaves passing through one-dimensional plasma photonic crystals (PPCs) have been investigated by experiment and simulation. The PPCs were formed by a series of discharge tubes filled with argon at 5 Torr, such that the plasma density in the tubes can be varied by adjusting the discharge current. The transmittance of X-band microwaves through the crystal structure was measured under different discharge currents and geometrical parameters. The finite-difference time-domain method was employed to analyze the detailed properties of the microwave propagation. The results show that bandgaps exist when the plasma is turned on. The properties of the bandgaps depend on the plasma density and the geometrical parameters of the PPCs structure. The PPCs can perform as a dynamic band-stop filter to control the transmission of microwaves within a wide frequency range

  5. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  6. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may assist in detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
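
The maximum-type statistic in the abstract above is the largest standardized coordinate-wise mean difference between the two samples, with its critical value calibrated by resampling rather than asymptotic formulas. The sketch below substitutes a plain nonparametric bootstrap on centered data for the paper's parametric bootstrap; sample sizes, dimension, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, p = 40, 40, 200

# two hypothetical samples with equal means (the null hypothesis holds)
X = rng.normal(size=(n1, p))
Y = rng.normal(size=(n2, p))

def max_stat(X, Y):
    """Maximum absolute standardized mean difference over coordinates."""
    d = X.mean(axis=0) - Y.mean(axis=0)
    se = np.sqrt(X.var(axis=0, ddof=1) / len(X) + Y.var(axis=0, ddof=1) / len(Y))
    return np.abs(d / se).max()

T = max_stat(X, Y)

# approximate the null distribution by resampling the centered data
B = 200
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
boot = np.array([
    max_stat(Xc[rng.integers(0, n1, n1)], Yc[rng.integers(0, n2, n2)])
    for _ in range(B)
])
crit = np.quantile(boot, 0.95)  # 5%-level critical value
reject = bool(T > crit)
```

Because the critical value comes from the resampled null distribution, no structural assumption on the covariance matrices is needed, which is the point the abstract emphasizes.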

  7. Construction of high-dimensional universal quantum logic gates using a Λ system coupled with a whispering-gallery-mode microresonator.

    Science.gov (United States)

    He, Ling Yan; Wang, Tie-Jun; Wang, Chuan

    2016-07-11

High-dimensional quantum systems provide a higher capacity of quantum channel, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates are required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate where the high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic qudit, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and it is on a shorter temporal scale for the experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to a 2n-dimensional case and that it is feasible with current technology.

  8. The dimensionality of stellar chemical space using spectra from the Apache Point Observatory Galactic Evolution Experiment

    Science.gov (United States)

    Price-Jones, Natalie; Bovy, Jo

    2018-03-01

Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not pre-suppose knowledge of which elements are important for distinguishing stars in chemical space. We use ˜16 000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5 ± 2)^(n−10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.
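
The dimensionality estimate in the abstract above amounts to counting principal components whose variance stands clearly above the measurement-noise floor. A synthetic stand-in for that analysis; the rank-10 factor model, noise level, and sizes are assumptions, not APOGEE data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stars, n_pix, k_true = 500, 300, 10

# synthetic "spectra": k_true latent abundance factors plus pixel noise
factors = rng.normal(size=(n_stars, k_true))
loadings = rng.normal(size=(k_true, n_pix))
noise_sd = 0.05
spectra = factors @ loadings + rng.normal(scale=noise_sd, size=(n_stars, n_pix))

# principal component variances via SVD of the mean-subtracted data matrix
Xc = spectra - spectra.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
eigvals = s ** 2 / (n_stars - 1)

# keep components whose variance clearly exceeds the total noise variance
noise_floor = noise_sd ** 2 * n_pix  # conservative per-component noise scale
n_components = int((eigvals > 5 * noise_floor).sum())
```

With a clear gap between signal and noise eigenvalues, the count recovers the true latent dimensionality; on real spectra the gap is softer, which is why the paper quotes "about 10" rather than an exact number.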

  9. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ˜4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ˜200∶1.

  10. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. Through the accuracy analysis, submillimeter accuracy is achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to the traditional transducer technique for monitoring the dynamic response of the shaking table structure.

  11. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

A novel data storage scheme in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured by an interferometer when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and show numerical results demonstrating the effectiveness of the proposed data storage.

  12. Chemical shift-dependent apparent scalar couplings: An alternative concept of chemical shift monitoring in multi-dimensional NMR experiments

    International Nuclear Information System (INIS)

    Kwiatkowski, Witek; Riek, Roland

    2003-01-01

The paper presents an alternative technique for chemical shift monitoring in a multi-dimensional NMR experiment. The monitored chemical shift is coded in the line-shape of a cross-peak through an apparent residual scalar coupling active during an established evolution period or acquisition. The size of the apparent scalar coupling is manipulated with an off-resonance radio-frequency pulse in order to correlate the size of the coupling with the position of the additional chemical shift. The strength of this concept is that chemical shift information is added without an additional evolution period and accompanying polarization transfer periods. This concept was incorporated into the three-dimensional triple-resonance experiment HNCA, adding the information of ¹Hα chemical shifts. The experiment is called HNCA coded HA, since the chemical shift of ¹Hα is coded in the line-shape of the cross-peak along the ¹³Cα dimension

  13. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

Full Text Available Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both of them have shown outstanding performance for solving NP-Hard optimization problems. However, they also suffer dramatic performance degradation for some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary to each other. The HS has strong global exploration power but low convergence speed. Conversely, the TLBO has much faster convergence speed but is easily trapped in local searches. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms together for synergistically solving complex optimization problems using a self-adaptive selection strategy. In the HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where the HS aims mainly to explore the unknown regions and the TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and show better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising in solving complex high-dimensional optimization problems. The experiment on portfolio optimization problems also demonstrates that the HSTLBO is effective in solving complex real-world applications.
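
The hybrid described above pairs Harmony Search's global exploration with TLBO's fast exploitation. As a reference point for the HS half, here is a minimal Harmony Search sketch; the parameter values (hmcr, par, bw), bounds, and sphere objective are illustrative defaults, not the tuned HSTLBO settings:

```python
import numpy as np

def harmony_search(f, dim, iters=2000, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   lo=-5.0, hi=5.0, seed=4):
    """Minimal Harmony Search for minimization (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:             # pitch adjustment
                    new[j] += bw * rng.uniform(-1.0, 1.0)
            else:                                  # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = cost.argmax()
        if c < cost[worst]:                        # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = cost.argmin()
    return hm[best], float(cost[best])

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = harmony_search(sphere, dim=5)
```

In the paper's hybrid, an analogous TLBO teacher/learner update runs alongside this loop, and a self-adaptive rule chooses which of the two generates the next candidate.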

  14. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality...
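
The angle-based outlier factor that the work above accelerates can be stated naively in cubic time: for each point, take the weighted variance over all pairs of other points of the term ⟨a,b⟩/(|a|²|b|²), with weights 1/(|a||b|); outliers see the remaining data within a narrow angular range and therefore score low. A small sketch (the toy 2-D data set is an assumption, and the paper's contribution is the near-linear random-projection estimator, not this baseline):

```python
import numpy as np

def abof(points):
    """Naive O(n^3) angle-based outlier factor; smaller score = more outlying."""
    n = len(points)
    scores = np.empty(n)
    for i in range(n):
        vals, weights = [], []
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                a, b = points[j] - points[i], points[k] - points[i]
                na, nb = np.linalg.norm(a), np.linalg.norm(b)
                vals.append(np.dot(a, b) / (na ** 2 * nb ** 2))
                weights.append(1.0 / (na * nb))
        vals, weights = np.array(vals), np.array(weights)
        mean = np.average(vals, weights=weights)
        scores[i] = np.average((vals - mean) ** 2, weights=weights)
    return scores

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(20, 2)), [[8.0, 8.0]]])  # cluster + 1 outlier
scores = abof(X)
outlier_index = int(scores.argmin())
```

Because the score depends on angles rather than raw distances, it degrades more gracefully in high dimensions, which is the motivation the abstract gives for building on it.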

  15. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  16. Separate-effects experiments on the hydrodynamics of air ingress phenomena for the very high temperature reactor

    International Nuclear Information System (INIS)

    Kim, S.; Talley, J.; Yadav, M.; Ireland, A.; Bajorek, S.

    2011-01-01

    The present study performs scaled separate-effects experiments to investigate the hydrodynamics in the air-ingress phenomena following a Depressurized Condition Cooldown in the Very High Temperature Gas-Cooled Reactor. First, a scoping experiment using water and brine is performed. The volumetric exchange rate is measured using a hydrometer, and flow visualizations are performed. Next, Helium-air experiments are performed to obtain three-dimensional oxygen concentration transient data using an oxygen analyzer. It is found that there exists a critical density difference ratio, before which the ingress rate increases linearly with time and after which the ingress rate slows down significantly. In both the water-brine and Helium-air experiments, this critical ratio is found to be approximately 0.7. (author)

  17. Separate-effects experiments on the hydrodynamics of air ingress phenomena for the very high temperature reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.; Talley, J.; Yadav, M., E-mail: skim@psu.edu [The Pennsylvania State Univ., University Park, Pennsylvania (United States); Ireland, A.; Bajorek, S. [The United States Nuclear Regulatory Commission, Washington DC (United States)

    2011-07-01

    The present study performs scaled separate-effects experiments to investigate the hydrodynamics in the air-ingress phenomena following a Depressurized Condition Cooldown in the Very High Temperature Gas-Cooled Reactor. First, a scoping experiment using water and brine is performed. The volumetric exchange rate is measured using a hydrometer, and flow visualizations are performed. Next, Helium-air experiments are performed to obtain three-dimensional oxygen concentration transient data using an oxygen analyzer. It is found that there exists a critical density difference ratio, before which the ingress rate increases linearly with time and after which the ingress rate slows down significantly. In both the water-brine and Helium-air experiments, this critical ratio is found to be approximately 0.7. (author)

  18. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    Science.gov (United States)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.

  19. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  20. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)

  1. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

Full Text Available PHYSICAL REVIEW A 96, 053860... (2017) High-dimensional quantum channel estimation using classical light Chemist M. Mabena CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South...

  2. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differs. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  3. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  4. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  5. High dimensional ICA analysis detects within-network functional connectivity damage of default mode and sensory motor networks in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Ottavia eDipasquale

    2015-02-01

Full Text Available High dimensional independent component analysis (ICA), compared to low dimensional ICA, allows performing a detailed parcellation of the resting state networks. The purpose of this study was to give further insight into functional connectivity (FC) in Alzheimer's disease (AD) using high dimensional ICA. For this reason, we performed both low and high dimensional ICA analyses of resting state fMRI (rfMRI) data of 20 healthy controls and 21 AD patients, focusing on the primarily altered default mode network (DMN) and exploring the sensory motor network (SMN). As expected, results obtained at low dimensionality were in line with previous literature. Moreover, high dimensional results allowed us to observe both the presence of within-network disconnections and FC damage confined to some of the resting state sub-networks. Due to the higher sensitivity of the high dimensional ICA analysis, our results suggest that high-dimensional decomposition in sub-networks is very promising to better localize FC alterations in AD and that FC damage is not confined to the default mode network.

  6. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling often makes it very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
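
The recursive idea described in this abstract can be sketched in a few lines. For an integrand that factorizes into nearest-neighbour Boltzmann weights on a periodic chain (as for the topological rotor), applying a one-dimensional Gauss-Legendre rule recursively collapses the d-dimensional integral into powers of a small transfer matrix. This is an illustrative NumPy sketch; the function name and the toy weight are assumptions, not the authors' implementation.

```python
import numpy as np

# Recursive numerical integration (RNI) sketch for a periodic chain of d sites
# with nearest-neighbour weight B(x, y):
#   Z = ∫ dx_1 ... dx_d  Π_t B(x_t, x_{t+1}),  with x_{d+1} = x_1.
# An m-point Gauss-Legendre rule applied one dimension at a time turns Z into
# the trace of the d-th power of an m×m "quadrature transfer matrix", so the
# cost grows like d·m^3 instead of m^d, and the error falls rapidly in m.

def recursive_integrate(B, d, m, a=-1.0, b=1.0):
    x, w = np.polynomial.legendre.leggauss(m)   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * x + 0.5 * (b + a)       # rescale nodes to [a, b]
    w = 0.5 * (b - a) * w
    M = w[:, None] * B(x[:, None], x[None, :])  # M_ij = w_i * B(x_i, x_j)
    return np.trace(np.linalg.matrix_power(M, d))

# Toy weight; for d = 8 a direct product rule would need 24^8 ≈ 1.1e11 points.
Z = recursive_integrate(lambda x, y: np.exp(-(x - y) ** 2), d=8, m=24)
```

For a fixed m, the trace form is exactly equal to the full m^d-point product rule, which is why the error is governed by the one-dimensional quadrature and decays very fast in m for smooth weights.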

  7. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling often makes it very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  8. High temperature graphite irradiation creep experiment in the Dragon Reactor. Dragon Project report

    Energy Technology Data Exchange (ETDEWEB)

    Manzel, R.; Everett, M. R.; Graham, L. W.

    1971-05-15

    The irradiation induced creep of pressed Gilsocarbon graphite under constant tensile stress has been investigated in an experiment carried out in FE 317 of the OECD High Temperature Gas-Cooled Reactor ''Dragon'' at Winfrith (England). The experiment covered a temperature range of 850 deg C to 1240 deg C and reached a maximum fast neutron dose of 1.19 x 10^21 n cm^-2 NDE (Nickel Dose DIDO Equivalent). Irradiation induced dimensional changes of a string of unrestrained graphite specimens are compared with the dimensional changes of three strings of restrained graphite specimens stressed to 40%, 58%, and 70% of the initial ultimate tensile strength of pressed Gilsocarbon graphite. Total creep strains ranging from 0.18% to 1.25% have been measured and a linear dependence of creep strain on applied stress was observed. Mechanical property measurements carried out before and after irradiation demonstrate that Gilsocarbon graphite can accommodate significant creep strains without failure or structural deterioration. Total creep strains are in excellent agreement with other data; however, the results indicate a relatively large temperature-dependent primary creep component which at 1200 deg C approaches a value three times larger than the normally assumed initial elastic strain. Secondary creep constants derived from the experiment show a temperature dependence and are in fair agreement with data reported elsewhere. A possible interpretation of the results is given.

  9. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if the digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  10. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
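
The reconstruction step described here (a pinhole camera model plus multi-view geometry) can be sketched with standard linear triangulation. The camera matrices and the point below are toy assumptions, not the experiment's calibration.

```python
import numpy as np

# Sketch of one frame of 4D reconstruction: a pinhole camera maps a world
# point X to a homogeneous pixel x ~ P @ [X, 1] (P is a 3x4 projection
# matrix). With two calibrated views, X is recovered by linear (DLT)
# triangulation; repeating per frame gives 3D tracks, and finite differences
# between frames give the particle velocities.

def triangulate(P1, P2, x1, x2):
    """Least-squares 3D point from two pinhole projections (homogeneous DLT)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

# Two toy cameras looking down the z axis, offset along x (illustrative only).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free image points, the DLT null vector recovers the world point exactly; with real detections, the same least-squares system simply returns the best-fit point.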

  11. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
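
The coarse-to-fine idea can be caricatured in a few lines. The actual method is a fully Bayesian probit model with Ising priors and MCMC; the correlation screening below is only an illustrative stand-in, and all names, sizes, and signal locations are invented for the example.

```python
import numpy as np

# Multiresolution selection caricature: first score brain *regions* (coarse
# scale) and keep only the best ones, then score individual *voxels* (fine
# scale) inside the retained regions. This mirrors the coarse-to-fine
# guidance, with plain |correlation| replacing posterior inclusion probability.

def multires_select(X, y, regions, n_regions=2, n_voxels=5):
    # Coarse scale: correlation of each region's mean signal with the outcome.
    region_ids = np.unique(regions)
    region_means = np.column_stack([X[:, regions == r].mean(axis=1)
                                    for r in region_ids])
    coarse = np.abs(np.corrcoef(region_means.T, y)[-1, :-1])
    keep_regions = region_ids[np.argsort(coarse)[-n_regions:]]
    # Fine scale: score voxels only inside the retained regions.
    cand = np.flatnonzero(np.isin(regions, keep_regions))
    fine = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in cand])
    return cand[np.argsort(fine)[-n_voxels:]]

rng = np.random.default_rng(0)
n, p = 500, 400
regions = np.repeat(np.arange(20), 20)            # 20 regions of 20 "voxels"
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 37] + 0.5 * X[:, 42] + 0.1 * rng.standard_normal(n)
selected = multires_select(X, y, regions)
```

The computational point survives the caricature: the fine-scale scan touches only voxels inside the regions that passed the coarse scan, instead of all p voxels.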

  12. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  13. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  14. Optimizing separations in online comprehensive two-dimensional liquid chromatography.

    Science.gov (United States)

    Pirok, Bob W J; Gargano, Andrea F G; Schoenmakers, Peter J

    2018-01-01

    Online comprehensive two-dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two-dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two-dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high-molecular-weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one-dimensional liquid chromatography, two-dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two-dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two-dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two-dimensional liquid chromatography separations. © 2017 The Authors. Journal of Separation Science published by WILEY-VCH Verlag GmbH & Co. KGaA.

  15. Laboratory setup and results of experiments on two-dimensional multiphase flow in porous media

    International Nuclear Information System (INIS)

    McBride, J.F.; Graham, D.N.

    1990-10-01

    In the event of an accidental release into the earth's subsurface of an immiscible organic liquid, such as a petroleum hydrocarbon or chlorinated organic solvent, the spatial and temporal distribution of the organic liquid is of great interest when considering efforts to prevent groundwater contamination or restore contaminated groundwater. An accurate prediction of immiscible organic liquid migration requires the incorporation of relevant physical principles in models of multiphase flow in porous media; these physical principles must be determined from physical experiments. This report presents a series of such experiments performed during the 1970s at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland. The experiments were designed to study the transient, two-dimensional displacement of three immiscible fluids in a porous medium. This experimental study appears to be the most detailed published to date. The data obtained from these experiments are suitable for the validation and test calibration of multiphase flow codes. 73 refs., 140 figs

  16. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
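
For the linear special case of such an additive system, the derivative-free idea can be sketched by regressing on integrated trajectories rather than on estimated derivatives. This is a minimal sketch under that linearity assumption; the paper's method is nonparametric and this is not its implementation.

```python
import numpy as np

# Derivative-free recovery for linear dynamics dx/dt = A x. Integrating the
# ODE gives  x_j(t) - x_j(0) = sum_k A_jk * ∫_0^t x_k(s) ds,
# so A can be estimated by regressing trajectory increments on cumulative
# (trapezoid-rule) integrals -- no noisy derivative estimates are needed.

def recover_A(t, X):
    """t: (n_times,), X: (n_times, p) sampled trajectory; least-squares A."""
    dt = np.diff(t)[:, None]
    I = np.zeros_like(X)
    I[1:] = np.cumsum(0.5 * dt * (X[1:] + X[:-1]), axis=0)  # ∫_0^t x_k ds
    Y = X - X[0]                                            # increments
    A_hat, *_ = np.linalg.lstsq(I[1:], Y[1:], rcond=None)   # Y ≈ I @ A^T
    return A_hat.T

# Small two-node network with known analytic solution:
# x1 = exp(-t), x2 = (1 + 2t) exp(-t) solves dx/dt = A x with A below.
A = np.array([[-1.0, 0.0], [2.0, -1.0]])
t = np.linspace(0.0, 2.0, 400)
X = np.stack([np.array([np.exp(-s), (1.0 + 2.0 * s) * np.exp(-s)]) for s in t])
A_hat = recover_A(t, X)
```

The recovered matrix directly encodes the network structure: a (near-)zero entry A_hat[j, k] means node k does not drive node j.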

  17. Simplification of coding of NRU loop experiment software with dimensional generator

    International Nuclear Information System (INIS)

    Davis, R. S.

    2006-01-01

    The following are specific topics of this paper: 1. There is much creativity in the manner in which Dimensional Generator can be applied to a specific programming task [2]. This paper tells how Dimensional Generator was applied to a reactor-physics task. 2. In this first practical use, Dimensional Generator itself proved not to need change, but a better user interface was found necessary, essentially because the relevance of Dimensional Generator to reactor physics was initially underestimated. It is briefly described. 3. The use of Dimensional Generator helps make reactor-physics source code somewhat simpler. That is explained here with brief examples from BURFEL-PC and WIMSBURF. 4. Most importantly, with the help of Dimensional Generator, all erroneous physical expressions were automatically detected. The errors are detailed here (in spite of the author's embarrassment) because they show clearly, both in theory and in practice, how Dimensional Generator offers quality enhancement of reactor-physics programming. (authors)

  18. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed to describe superconductivity in the framework of a nonadiabatic Heisenberg model in order to interpret the outstanding symmetry properties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-Tc compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energetic excitations (of well-defined symmetry) which mediate pair formation. This fact is proposed as being responsible for the high transition temperatures of these compounds. (author)

  19. Experiments on melting in classical and quantum two dimensional electron systems

    International Nuclear Information System (INIS)

    Williams, F.I.B.

    1991-01-01

    ''Two dimensional electron system'' (2DES) here refers to electrons whose dynamics is free in 2 dimensions but blocked in the third. Experiments have been performed in two limiting situations: the classical, low density, limit realised by electrons deposited on a liquid helium surface and the quantum, high density, limit realised by electrons at an interface between two epitaxially matched semiconductors. In the classical system, where T ≫ T_Q, the thermodynamic state is determined by the competition between the temperature and the Coulomb interaction; melting is induced either by raising the temperature at constant density or by lowering the density at finite temperature. In the quantum system, it is not possible to lower the density below about 100n W without the Coulomb interaction losing out to the random field representing the extrinsic disorder imposed by the semiconductor host. Instead one has to induce crystallisation with the help of the Lorentz force, by applying a perpendicular magnetic field B [2]. As the quantum magnetic length l_c = (ħc/eB)^1/2 is reduced with respect to the interelectronic spacing a, expressed by the filling factor ν = 2l_c²/a², the system exhibits the quantum Hall effect (QHE), first for integer and then for fractional values of ν. The fractional quantum Hall effect (FQHE) is a result of Coulomb-induced correlation in the quantum liquid, but as ν is decreased still further the correlations are expected to take on long-range crystal-like periodicity accompanied by elastic shear rigidity. Such a state can nonetheless be destroyed by the disordering effect of temperature, giving rise to a phase boundary in a (T, B) plane. The aim of experiment is first to determine the phase diagram and then to help elucidate the mechanism of the melting. (author)

  20. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...

  1. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

    Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during the tasks was completed by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Linking experiment and theory for three-dimensional networked binary metal nanoparticle–triblock terpolymer superstructures

    KAUST Repository

    Li, Zihui; Hur, Kahyun; Sai, Hiroaki; Higuchi, Takeshi; Takahara, Atsushi; Jinnai, Hiroshi; Gruner, Sol M.; Wiesner, Ulrich

    2014-01-01

    the intimate coupling of synthesis, in-depth electron tomographic characterization and theory enables exquisite control of superstructure in highly ordered porous three-dimensional continuous networks from single and binary mixtures of metal nanoparticles

  3. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  4. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement.

    Science.gov (United States)

    Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin

    2016-04-04

    It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scan methods using many exposures and fringe pattern projections, which consume a lot of time, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For the bright regions due to high surface reflectivity and high illumination by the ambient light and surface interreflections, the projected intensity is reduced just low enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for those dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation.
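
The two-preliminary-shot adaptation can be sketched per pixel: estimate each pixel's response from an unsaturated dim projection, then solve for the projected intensity that lands just below saturation. The function and parameter names are illustrative assumptions, and the camera is modelled as linear, which the real system need not be.

```python
import numpy as np

# Per-pixel adaptive fringe intensity (toy linear-camera model):
#  - shot 1: uniform full-power (255) projection -> reveals saturated pixels;
#  - shot 2: uniform dim projection (unsaturated) -> per-pixel response;
#  - adapted pattern: reduce intensity only where shot 1 saturated, keep the
#    maximum of 255 elsewhere so dark regions retain a high SNR.

def adapt_projection(full_capture, dim_capture, dim_level=64, target=230):
    # Camera counts per unit of projected intensity, from the dim shot.
    response = np.clip(dim_capture, 1, None) / float(dim_level)
    adapted = np.full(full_capture.shape, 255.0)
    saturated = full_capture >= 255
    adapted[saturated] = target / response[saturated]  # land just below 255
    return np.clip(np.round(adapted), 0, 255).astype(np.uint8)

# Surface with one highly reflective pixel (r = 2.0) and one dark pixel (0.3).
refl = np.array([[2.0, 0.3]])
full = np.clip(refl * 255, 0, 255)   # bright pixel saturates at full power
dim = np.clip(refl * 64, 0, 255)     # both pixels stay unsaturated
adapted = adapt_projection(full, dim)
```

Re-capturing with the adapted pattern then keeps the bright pixel just below saturation while the dark pixel is still driven at full power.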

  5. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called the Fast Library for Approximate Nearest Neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
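
The search strategy behind the priority search k-means tree can be illustrated with a two-level toy: partition the points with k-means, then have each query inspect only the few branches whose centroids are closest. This is pure NumPy and not the FLANN API; a real implementation recurses to many levels and keeps one priority queue across the whole tree.

```python
import numpy as np

# Two-level k-means tree: points are partitioned into k clusters; a query
# visits branches in order of centroid distance and inspects only `checks`
# of them, trading a small chance of missing the true neighbour for speed.

def kmeans(X, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    return C, lab

def build(X, k=8):
    C, lab = kmeans(X, k)
    return C, [np.flatnonzero(lab == j) for j in range(k)]

def search(X, tree, q, checks=3):
    C, buckets = tree
    order = np.argsort(((C - q) ** 2).sum(-1))[:checks]  # best branches first
    cand = np.concatenate([buckets[j] for j in order])
    return cand[np.argmin(((X[cand] - q) ** 2).sum(-1))]

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 32))
tree = build(X)
query = X[123] + 0.01 * rng.standard_normal(32)  # near-duplicate of point 123
found = search(X, tree, query)
```

With `checks=3` only roughly 3/8 of the data is scanned per query, yet near-duplicate queries still find their true neighbour; raising `checks` trades speed for accuracy, which is exactly the knob the paper's auto-configuration tunes.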

  6. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and the learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue or nausea reported with first-generation systems are no different than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probe in some of the devices are counted among the negative aspects that need to be improved. Depth loss in tissues with 2D LVSs and the associated adverse events can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side-effects reported by surgeons with first-generation systems, 3D LVSs seem to be a strong competitor to classical laparoscopic imaging systems. Thanks to technological advancements, using lighter and smaller cameras and monitors without glasses is in the near future.

  7. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is used to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
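
The scheme can be caricatured without any dependencies beyond NumPy. The paper partitions the features by redundancy and trains SVMs; here a random partition and a nearest-centroid base learner stand in, so every name and parameter below is an assumption for illustration only.

```python
import numpy as np

# Feature-space-partitioning ensemble: split the features into disjoint
# subsets, train one base classifier per subset, combine by majority vote.

class SubspaceEnsemble:
    def __init__(self, n_parts=5, seed=0):
        self.n_parts, self.seed = n_parts, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.parts = np.array_split(rng.permutation(X.shape[1]), self.n_parts)
        self.classes = np.unique(y)
        # Nearest-centroid base learner per feature subset.
        self.centroids = [np.stack([X[y == c][:, p].mean(0)
                                    for c in self.classes])
                          for p in self.parts]
        return self

    def predict(self, X):
        votes = np.stack([
            np.argmin(((X[:, p][:, None] - C[None]) ** 2).sum(-1), axis=1)
            for p, C in zip(self.parts, self.centroids)])
        maj = np.array([np.bincount(col, minlength=len(self.classes)).argmax()
                        for col in votes.T])          # majority vote per sample
        return self.classes[maj]

rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((100, 100)),
               rng.standard_normal((100, 100)) + 0.5])  # two shifted classes
y = np.repeat([0, 1], 100)
model = SubspaceEnsemble().fit(X, y)
accuracy = (model.predict(X) == y).mean()
```

Each base learner sees only 20 of the 100 features and is individually mediocre, but the majority vote over five of them classifies the toy data almost perfectly, which is the point of the partitioning.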

  8. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  9. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  10. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Jun, Hu [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044 (China); Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); Huijun, Sun, E-mail: hjsun1@bjtu.edu.cn [MOE Key Laboratory for Urban Transportation Complex Systems Theory and Technology, Beijing Jiaotong University, Beijing 100044 (China); School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044 (China); Juan, Wei [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); Xiaodan, Chen [College of Information Science and Technology, Chengdu University, Chengdu 610106 (China); Lei, You [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China); College of Information Science and Technology, Chengdu University, Chengdu 610106 (China); Musong, Gu [Faculty of Computer Science, Chengdu Normal University, Chengdu 611130 (China)

    2014-10-24

    A novel three-dimensional cellular automata evacuation model incorporating a stairs factor was proposed to capture the paired effect and varied velocities in pedestrian evacuation. In the model, a pedestrian's probability of moving to a target position at the next moment was defined in terms of distance profit and repulsive-force profit, and the evacuation strategy was elaborated by analyzing the varied velocities and repulsive phenomena in the moving process. Finally, experiments on the simulation platform were conducted to study the relationships between evacuation time, average velocity, and pedestrian velocity. The results showed that when the proportion of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; conversely, when the proportion of paired pedestrians was higher, evacuation efficiency was improved by a strategy that avoided conflicts and gave priority to scattered evacuation. - Highlights: • A novel three-dimensional evacuation model was presented with a stair factor. • The paired effect and varied velocities were considered in the evacuation model. • The cellular automata model is improved by repulsive force.

  11. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    International Nuclear Information System (INIS)

    Jun, Hu; Huijun, Sun; Juan, Wei; Xiaodan, Chen; Lei, You; Musong, Gu

    2014-01-01

    A novel three-dimensional cellular automata evacuation model incorporating a stairs factor was proposed to capture the paired effect and varied velocities in pedestrian evacuation. In the model, a pedestrian's probability of moving to a target position at the next moment was defined in terms of distance profit and repulsive-force profit, and the evacuation strategy was elaborated by analyzing the varied velocities and repulsive phenomena in the moving process. Finally, experiments on the simulation platform were conducted to study the relationships between evacuation time, average velocity, and pedestrian velocity. The results showed that when the proportion of single pedestrians in the system was higher, the shortest-route strategy improved evacuation efficiency; conversely, when the proportion of paired pedestrians was higher, evacuation efficiency was improved by a strategy that avoided conflicts and gave priority to scattered evacuation. - Highlights: • A novel three-dimensional evacuation model was presented with a stair factor. • The paired effect and varied velocities were considered in the evacuation model. • The cellular automata model is improved by repulsive force.

  12. Kinetically Controlled Synthesis of Pt-Based One-Dimensional Hierarchically Porous Nanostructures with Large Mesopores as Highly Efficient ORR Catalysts.

    Science.gov (United States)

    Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Engelhard, Mark H; Xia, Haibing; Du, Dan; Lin, Yuehe

    2016-12-28

    Rational design and construction of Pt-based porous nanostructures with large mesopores have attracted significant attention because of their high surface area and efficient mass transport. Hydrochloric acid-induced, kinetically controlled reduction of metal precursors in the presence of the soft template F-127 and the hard template of tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchically porous PtCu alloy nanostructures with large mesopores. Moreover, electrochemical experiments demonstrated that the PtCu hierarchically porous nanostructures synthesized under optimized conditions exhibit enhanced electrocatalytic performance for the oxygen reduction reaction in acid media.

  13. Kinetically Controlled Synthesis of Pt-Based One-Dimensional Hierarchically Porous Nanostructures with Large Mesopores as Highly Efficient ORR Catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Shaofang; Zhu, Chengzhou; Song, Junhua; Engelhard, Mark H.; Xia, Haibing; Du, Dan; Lin, Yuehe

    2016-12-28

    Rational design and construction of Pt-based porous nanostructures with large mesopores have attracted significant attention because of their high surface area and efficient mass transport. Hydrochloric acid-induced kinetic reduction of metal precursors in the presence of the soft template F-127 and the hard template of tellurium nanowires has been successfully demonstrated to construct one-dimensional hierarchically porous PtCu alloy nanostructures with large mesopores. Moreover, electrochemical experiments demonstrated that the resultant PtCu hierarchically porous nanostructures with optimized composition exhibit enhanced electrocatalytic performance for the oxygen reduction reaction.

  14. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulations show that EPS-LASSO outperforms existing methods, with stable type I error and FDR control. EPS-LASSO provides consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small, and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant body-mass-index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
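The two ingredients of the design (sample only the extreme phenotypes, then fit a sparse LASSO model) can be illustrated with a toy simulation. The sampling quantiles, penalty value, and coordinate-descent solver below are our own choices; the decorrelated-score test of EPS-LASSO itself is not reproduced.

```python
# Toy illustration (our construction): extreme phenotype sampling followed by a
# plain coordinate-descent LASSO fit. Not the EPS-LASSO inference procedure.
import numpy as np

rng = np.random.default_rng(1)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Simulate: 2 causal predictors out of 30; keep only the phenotype tails.
n, p = 600, 30
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)
lo, hi = np.quantile(y, [0.25, 0.75])
keep = (y < lo) | (y > hi)                   # extreme phenotype sampling
beta = lasso_cd(X[keep], y[keep], lam=20.0)
selected = np.flatnonzero(np.abs(beta) > 0.1)
print("selected predictors:", selected)
```

By construction only predictors 0 and 1 carry signal, so the sparse fit on the extreme subsample should recover exactly those two.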

  15. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction with the Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista has the advantages of low demands on operator expertise, broad applicability, simple operation, and high precision. Pixel-level accuracy in point positioning and height determination could be achieved if the digital elevation model (DEM) and sensor orientation model were of high precision and the off-nadir view angle was appropriate.

  16. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high-electron-mobility and highly light-scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large-surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve charge extraction, resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSCs is demonstrated. © 2011 American Chemical Society.

  17. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas; Arsenault, Éric; Heiniger, Leo-Philipp; Soheilnia, Navid; Brillet, Jérémie; Moehl, Thomas; Zakeeruddin, Shaik; Ozin, Geoffrey A.; Grätzel, Michael

    2011-01-01

    Herein, we present a straightforward bottom-up synthesis of a high-electron-mobility and highly light-scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large-surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve charge extraction, resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSCs is demonstrated. © 2011 American Chemical Society.

  18. High-resolution three-dimensional mapping of semiconductor dopant potentials

    DEFF Research Database (Denmark)

    Twitchett, AC; Yates, TJV; Newcomb, SB

    2007-01-01

    Semiconductor device structures are becoming increasingly three-dimensional at the nanometer scale. A key issue that must be addressed to enable future device development is the three-dimensional mapping of dopant distributions, ideally under "working conditions". Here we demonstrate how a combination of electron holography and electron tomography can be used to determine quantitatively the three-dimensional electrostatic potential in an electrically biased semiconductor device with nanometer spatial resolution.

  19. Applications of one-dimensional or two-dimensional nuclear magnetic resonance to the structural and conformational study of oligosaccharides. Design and adjustment of new techniques

    International Nuclear Information System (INIS)

    Berthault, Patrick

    1988-01-01

    Oligosaccharides are natural compounds of great importance, as they take part in all metabolic processes of cell life. Before structure-activity relationships can be determined, precise knowledge of their chemical nature is required. This research thesis therefore describes various high-resolution nuclear magnetic resonance (NMR) experiments and demonstrates their application to four oligosaccharides. After a brief description of NMR principles, using both a conventional description and a formalism derived from quantum mechanics, the author outlines the weaknesses of older NMR techniques and introduces new ones that exploit scalar couplings and magnetization transfer in one-dimensional heteronuclear experiments. General principles of two-dimensional experiments are then presented and developed in terms of simple correlations, multiple correlations, and correlations via double-quantum coherences. Experiments in light water are then described, and different experiments are performed to determine the structure and conformation of each unit. Dipolar interactions are finally used to highlight proximities between atoms [fr]

  20. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  1. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  2. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Feature selection is an important step in building accurate classifiers and provides a better understanding of data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form; for such data sets, an exhaustive search over all combinations of features is a prerequisite for finding the optimal feature subsets. We show that our approach outperforms existing filter feature subset selection methods on most of the 24 selected benchmark data sets.
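The Markov-blanket criterion above (grow the feature set until its mutual information with the class reaches the class entropy H(Y)) and the failure of pairwise approximations can both be seen on XOR-style data. The construction below is ours, not from the paper.

```python
# Toy demonstration (our construction) of the stopping criterion I(X;Y) = H(Y):
# on XOR data, each feature alone has ~zero MI with the class, but the pair
# reaches H(Y) exactly, i.e. it is a Markov blanket of Y.
import numpy as np
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def mutual_info(features, y):
    """I(X;Y) = H(Y) - H(Y|X) for discrete features (rows encoded as tuples)."""
    rows = [tuple(r) for r in features]
    n = len(y)
    h_cond = 0.0
    for key, cnt in Counter(rows).items():
        ys = [y[i] for i, r in enumerate(rows) if r == key]
        h_cond += (cnt / n) * entropy(ys)
    return entropy(y) - h_cond

# XOR-style data: neither x1 nor x2 alone is informative; both together are.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=(400, 3))        # x[:, 2] is pure noise
y = (x[:, 0] ^ x[:, 1]).tolist()
h_y = entropy(y)
mi_single = mutual_info(x[:, [0]], y)
mi_pair = mutual_info(x[:, [0, 1]], y)
print(f"H(Y)={h_y:.3f}  I(x1;Y)={mi_single:.3f}  I(x1,x2;Y)={mi_pair:.3f}")
```

Because y is a deterministic function of (x1, x2), the conditional entropy H(Y|x1, x2) is exactly zero, so the pair's MI equals H(Y) while each single feature's MI is near zero — the case where pairwise approximations provably fail.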

  3. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

    Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur has remained elusive. We report that molybdenum disulfide layers are able to serve as a nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected-area electron diffraction (SAED), agree with the computational insights. Owing to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s−1 without conductive additives. Using such hybrid structures as electrodes, two-electrode supercapacitor cells yield a power density of 106 W kg−1 and an energy density of 47.5 Wh kg−1 in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  4. Evaluation of viewing experiences induced by a curved three-dimensional display

    Science.gov (United States)

    Mun, Sungchul; Park, Min-Chul; Yano, Sumio

    2015-10-01

    Despite an increased need for three-dimensional (3-D) functionality in curved displays, comparisons pertinent to human factors between curved and flat panel 3-D displays have rarely been tested. This study compared stereoscopic 3-D viewing experiences induced by a curved display with those of a flat panel display by evaluating subjective and objective measures. Twenty-four participants took part in the experiments and viewed 3-D content with two different displays (flat and curved 3-D display) within a counterbalanced and within-subject design. For the 30-min viewing condition, a paired t-test showed significantly reduced P300 amplitudes, which were caused by engagement rather than cognitive fatigue, in the curved 3-D viewing condition compared to the flat 3-D viewing condition at P3 and P4. No significant differences in P300 amplitudes were observed for 60-min viewing. Subjective ratings of realness and engagement were also significantly higher in the curved 3-D viewing condition than in the flat 3-D viewing condition for 30-min viewing. Our findings support that curved 3-D displays can be effective for enhancing engagement among viewers based on specific viewing times and environments.

  5. The Episodic Nature of Experience: A Dynamical Systems Analysis.

    Science.gov (United States)

    Sreekumar, Vishnu; Dennis, Simon; Doxas, Isidoros

    2017-07-01

    Context is an important construct in many domains of cognition, including learning, memory, and emotion. We used dynamical systems methods to demonstrate the episodic nature of experience by showing a natural separation between the scales over which within-context and between-context relationships operate. To do this, we represented an individual's emails extending over about 5 years in a high-dimensional semantic space and computed the dimensionalities of the subspaces occupied by these emails. Personal discourse has a two-scaled geometry with smaller within-context dimensionalities than between-context dimensionalities. Prior studies have shown that reading experience (Doxas, Dennis, & Oliver, 2010) and visual experience (Sreekumar, Dennis, Doxas, Zhuang, & Belkin, 2014) have a similar two-scaled structure. Furthermore, the recurrence plot of the emails revealed that experience is predictable and hierarchical, supporting the constructs of some influential theories of memory. The results demonstrate that experience is not scale-free and provide an important target for accounts of how experience shapes cognition. Copyright © 2016 Cognitive Science Society, Inc.

  6. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-09-15

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on parallel multi-dimensional force sensors, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, a mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.
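As a hypothetical illustration of quantifying cross-dimension coupling, one can compare the off-diagonal response of a sensor's calibration matrix with its main-axis response. The matrix values and the percentage definition below are invented for illustration; the paper's coupling degree ε may be defined differently.

```python
# Illustrative (hypothetical) coupling figure for a 3-D force sensor:
# off-diagonal calibration response relative to the main-axis response.
import numpy as np

# Hypothetical calibration matrix: rows = output channels, cols = applied axes.
C = np.array([[1.00, 0.02, 0.01],
              [0.03, 0.98, 0.02],
              [0.01, 0.02, 1.01]])

diag = np.abs(np.diag(C))
off = np.abs(C) - np.diag(diag)              # zero out the main diagonal
coupling = off.sum(axis=1) / diag * 100      # percent coupling per channel
print("coupling per channel (%):", np.round(coupling, 2))
```

A perfectly decoupled sensor would have a diagonal calibration matrix, i.e. zero percent coupling on every channel.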

  7. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Computationally expensive engineering simulations can often hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.

  8. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Clustering techniques for high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops a method to cluster such data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this already yields efficient clustering, it is further optimized using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified with synthetic datasets.

  9. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    Science.gov (United States)

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to that of stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
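The stage-wise fitting mentioned above can be sketched for a plain regression endpoint. This is a simplified stand-in for component-wise likelihood-based boosting of time-to-event models: ordinary least-squares residuals replace the likelihood, and the step size and number of steps are our own choices.

```python
# Toy component-wise (stage-wise) L2 boosting in a p > n setting: at each step,
# only the single covariate that best fits the current residuals is updated.
import numpy as np

rng = np.random.default_rng(5)

def componentwise_boost(X, y, steps=150, nu=0.1):
    """L2 boosting with component-wise base learners and shrinkage nu."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    resid = y.astype(float).copy()
    for _ in range(steps):
        g = X.T @ resid                  # covariate scores on current residuals
        coefs = g / col_sq               # per-covariate least-squares fits
        j = (g * coefs).argmax()         # largest residual-variance reduction
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta

n, p = 100, 200                          # p > n: high-dimensional setting
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)
beta = componentwise_boost(X, y)
print("largest coefficients at indices:", np.argsort(-np.abs(beta))[:2])
```

The shrunken, component-wise updates give implicit variable selection: most coefficients stay at zero while the truly informative covariates (indices 0 and 3 here) accumulate weight.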

  10. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Science.gov (United States)

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
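The model's core machinery (words as points in a feature space, recognition as precision-weighted Bayesian inference over noisy auditory evidence plus a partial visual cue) can be sketched with a small Monte Carlo simulation. The dimensionality, noise levels, and the restriction of the visual cue to a subset of feature dimensions are our own assumptions, so this only illustrates the integration mechanism, not the paper's full intermediate-noise result.

```python
# Monte Carlo sketch (our construction) of Bayesian audiovisual word recognition:
# pick the word with the highest posterior given a noisy auditory sample and,
# optionally, a low-noise visual cue on the first m feature dimensions.
import numpy as np

rng = np.random.default_rng(3)

def recognition_rate(d, sigma_a, use_visual, m=2, sigma_v=0.3,
                     n_words=30, trials=3000):
    """Fraction of trials where the maximum-posterior word is correct."""
    words = rng.normal(size=(n_words, d))
    correct = 0
    for _ in range(trials):
        k = rng.integers(n_words)
        xa = words[k] + rng.normal(scale=sigma_a, size=d)
        # Negative log-posterior (equal priors): precision-weighted distances.
        score = ((words - xa) ** 2).sum(axis=1) / sigma_a ** 2
        if use_visual:  # visual cue informs only the first m dimensions
            xv = words[k, :m] + rng.normal(scale=sigma_v, size=m)
            score += ((words[:, :m] - xv) ** 2).sum(axis=1) / sigma_v ** 2
        correct += int(score.argmin() == k)
    return correct / trials

gains = {}
for sigma_a in (0.3, 1.5, 6.0):
    gains[sigma_a] = (recognition_rate(8, sigma_a, True)
                      - recognition_rate(8, sigma_a, False))
    print(f"sigma_a={sigma_a}: audiovisual gain = {gains[sigma_a]:+.3f}")
```

Under these toy settings the visual gain is near zero when the auditory signal is clean and grows once auditory noise makes the auditory evidence ambiguous, which is the precision-weighting effect the model formalizes.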

  11. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Directory of Open Access Journals (Sweden)

    Wei Ji Ma

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.

  12. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  13. Effects of SiNx on two-dimensional electron gas and current collapse of AlGaN/GaN high electron mobility transistors

    International Nuclear Information System (INIS)

    Fan, Ren; Zhi-Biao, Hao; Lei, Wang; Lai, Wang; Hong-Tao, Li; Yi, Luo

    2010-01-01

    SiNx is commonly used as a passivation material for AlGaN/GaN high electron mobility transistors (HEMTs). In this paper, the effects of SiNx passivation films on both the two-dimensional electron gas characteristics and the current collapse of AlGaN/GaN HEMTs are investigated. The SiNx films are deposited by high- and low-frequency plasma-enhanced chemical vapour deposition, and they exert different strains on the AlGaN/GaN heterostructure, which can explain the experimental results. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  14. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of an LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed, including the viscous terms (drag and turbulence) and the interfacial mass, momentum, and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified against an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region, careful profiling is necessary to reduce the inertial effects on the liquid velocity field.

  15. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
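The masking idea (only a small, point-specific subset of features is informative) can be caricatured with a hard-EM, k-means-style loop in which masked-out entries are replaced by a global noise mean. This is our own simplification, not the authors' Masked EM algorithm, which works with full mixture responsibilities and per-feature noise statistics.

```python
# Toy "masked" hard-EM clustering: each point carries a binary mask of
# informative features; masked-out entries are filled with a global noise mean
# before computing assignments. A caricature of the Masked EM idea, not the
# published algorithm.
import numpy as np

rng = np.random.default_rng(4)

def masked_kmeans(X, M, k, n_iter=20):
    """Hard-EM clustering on mask-aware 'virtual' data."""
    n, d = X.shape
    noise_mean = X[M == 0].mean()
    Xv = np.where(M == 1, X, noise_mean)          # virtual, mask-filled data
    centers = Xv[:: max(1, n // k)][:k].copy()    # deterministic spread-out init
    for _ in range(n_iter):
        assign = ((Xv[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = Xv[assign == c].mean(0)
    return assign

# Two clusters in 100 dimensions; signal lives in the first 20 features, and
# each point additionally unmasks ~10% of features at random.
n, d = 200, 100
labels = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[labels == 1, :20] += 3.0
M = (rng.random((n, d)) < 0.1).astype(int)
M[:, :20] = 1                                     # informative features unmasked
assign = masked_kmeans(X, M, 2)
agreement = max((assign == labels).mean(), (assign != labels).mean())
print("cluster/label agreement:", agreement)
```

Filling masked entries with a constant removes their noise contribution from the distance computation, which is why the informative 20 dimensions dominate the assignments even though they are a small fraction of the 100.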

  16. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection

    Science.gov (United States)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen

    2013-08-01

    This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes by using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. Then the two phase maps are unwrapped using a simple look-up-table based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of projected patterns, the TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detail our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
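
    The number-theoretical look-up-table step can be illustrated with a toy example (the fringe periods below are chosen for illustration, not taken from the paper): when the two periods are coprime, every absolute coordinate up to their product maps to a unique pair of wrapped coordinates, by the Chinese remainder theorem.

```python
# Two coprime fringe periods, in projector pixels (illustrative values).
l1, l2 = 7, 9

# Build the look-up table once: each absolute coordinate 0..62 yields a
# distinct pair of wrapped coordinates (Chinese remainder theorem).
table = {(x % l1, x % l2): x for x in range(l1 * l2)}

def unwrap(w1, w2):
    # Recover the absolute coordinate from the two wrapped ones.
    return table[(w1, w2)]

recovered = [unwrap(x % l1, x % l2) for x in range(l1 * l2)]
```

    The table lookup is O(1) per pixel, which is what makes this unwrapping strategy compatible with kilohertz frame rates.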

  17. Optimizing separations in online comprehensive two‐dimensional liquid chromatography

    Science.gov (United States)

    Gargano, Andrea F.G.; Schoenmakers, Peter J.

    2017-01-01

    Abstract Online comprehensive two‐dimensional liquid chromatography has become an attractive option for the analysis of complex nonvolatile samples found in various fields (e.g. environmental studies, food, life, and polymer sciences). Two‐dimensional liquid chromatography complements the highly popular hyphenated systems that combine liquid chromatography with mass spectrometry. Two‐dimensional liquid chromatography is also applied to the analysis of samples that are not compatible with mass spectrometry (e.g. high‐molecular‐weight polymers), providing important information on the distribution of the sample components along chemical dimensions (molecular weight, charge, lipophilicity, stereochemistry, etc.). Also, in comparison with conventional one‐dimensional liquid chromatography, two‐dimensional liquid chromatography provides a greater separation power (peak capacity). Because of the additional selectivity and higher peak capacity, the combination of two‐dimensional liquid chromatography with mass spectrometry allows for simpler mixtures of compounds to be introduced in the ion source at any given time, improving quantitative analysis by reducing matrix effects. In this review, we summarize the rationale and principles of two‐dimensional liquid chromatography experiments, describe advantages and disadvantages of combining different selectivities and discuss strategies to improve the quality of two‐dimensional liquid chromatography separations. PMID:29027363

  18. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousands. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need to consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse, affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
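
    The modular-phenotype assumption can be made concrete with a small simulation (a generic sparse factor model with invented sizes, not the authors' Bayesian implementation): a few sparse latent factors generate many observed traits, so the implied trait covariance is highly structured with mostly zero off-diagonal entries.

```python
import numpy as np

rng = np.random.default_rng(7)
p, k, n = 100, 3, 50          # 100 traits, 3 latent factors, 50 individuals

# Sparse loadings: each latent factor affects only 5 of the 100 traits.
L = np.zeros((p, k))
for j in range(k):
    idx = rng.choice(p, size=5, replace=False)
    L[idx, j] = rng.normal(1.0, 0.2, size=5)

F = rng.normal(size=(n, k))                    # factor scores per individual
Y = F @ L.T + 0.1 * rng.normal(size=(n, p))    # observed high-dim phenotypes

# The implied trait covariance L L^T + s^2 I is mostly zero off-diagonal:
G = L @ L.T + 0.01 * np.eye(p)
sparsity = (np.abs(G) < 1e-12).mean()
```

    Estimating the sparse loadings L rather than all p(p+1)/2 covariance entries is what keeps the problem tractable when traits vastly outnumber individuals.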

  19. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  20. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are sampling variation, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method performs remarkably well. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be detected and filtered automatically. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
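
    The thresholding idea can be sketched as follows. This is a generic hard-threshold sketch with a universal cutoff c*sqrt(log(p)/n), not the paper's adaptive, robust estimator: genuinely co-expressed genes keep their large correlations, while most spurious small correlations are zeroed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: 50 samples, 30 genes; genes 0-4 are co-expressed.
n, p = 50, 30
base = rng.normal(size=(n, 1))
X = rng.normal(size=(n, p))
X[:, :5] += 2.0 * base            # shared signal in the first five genes

def threshold_corr(X, c=2.0):
    # Hard-threshold the sample correlation at the universal cutoff
    # c * sqrt(log(p) / n); a sketch, not the paper's robust estimator.
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    cut = c * np.sqrt(np.log(p) / n)
    Rt = np.where(np.abs(R) >= cut, R, 0.0)
    np.fill_diagonal(Rt, 1.0)
    return Rt

Rt = threshold_corr(X)
block_kept = np.abs(Rt[:5, :5]).min()          # co-expressed block survives
off = Rt[5:, 5:][~np.eye(p - 5, dtype=bool)]
frac_zero = (off == 0.0).mean()                # spurious entries zeroed
```

    Adaptive variants replace the single cutoff with entry-specific thresholds, which is what allows the filtering to remain reliable under heterogeneous noise.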

  1. Some aspects of the applications of wire chambers in high energy physics experiments at large accelerators

    International Nuclear Information System (INIS)

    Turala, M.

    1982-01-01

    An application of proportional and drift chambers in four large spectrometers at the accelerators of IHEP Serpukhov and CERN Geneva is described. The operation of wire chambers at high intensities and high multiplicities of particles is discussed. The results of investigations of their efficiencies, spatial resolution (for one- and two-dimensional readout) and long-term stability are presented. Problems of preselection of a given class of events are discussed. The systems for preselection of defined multiplicities or a scattering angle of particles, in which proportional chambers have been used, are described, and the results of their application in real experiments are presented. (author)

  2. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. ... have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and fibre glass in the three dimensional form; We also have Pencil, Charcoal, Pastel and Acrylic oil-paint in two dimensional form.

  3. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. It is an art form executed in three dimensional (3D) and two dimensional (2D) formats respectively. Uncountable materials have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and fibre ...

  4. Automated Search for new Quantum Experiments.

    Science.gov (United States)

    Krenn, Mario; Malik, Mehul; Fickler, Robert; Lapkiewicz, Radek; Zeilinger, Anton

    2016-03-04

    Quantum mechanics predicts a number of, at first sight, counterintuitive phenomena. It therefore remains a question whether our intuition is the best way to find new experiments. Here, we report the development of the computer algorithm Melvin, which is able to find new experimental implementations for the creation and manipulation of complex quantum states. Indeed, the discovered experiments extensively use unfamiliar and asymmetric techniques which are challenging to understand intuitively. The results range from the first implementation of a high-dimensional Greenberger-Horne-Zeilinger state to a vast variety of experiments for asymmetrically entangled quantum states, a feature that can only exist when both the number of involved parties and the number of dimensions are larger than 2. Additionally, new types of high-dimensional transformations are found that perform cyclic operations. Melvin autonomously learns from solutions for simpler systems, which significantly speeds up the discovery rate of more complex experiments. The ability to automate the design of a quantum experiment can be applied to many quantum systems and allows the physical realization of quantum states previously thought of only on paper.

  5. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods: the areas reported by the control software and by fitting the data with a Gaussian model; these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision in regards to retention time in the second dimension (%RSD below 1.16%) and cumulative second dimension peak area (%RSD below 3.73% from the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatographic control software displayed superior limits of detection, on the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to a much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
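
    The Gaussian peak-area method can be sketched without chromatography software: since the logarithm of a Gaussian is a parabola, an ordinary quadratic least-squares fit to log-intensity recovers retention time, width, and hence area (Caruana's method). The peak parameters below are invented for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic second-dimension peak: area 3.0, retention time 4.2, sigma 0.35.
t = np.linspace(0.0, 10.0, 500)
area, tr, sig = 3.0, 4.2, 0.35
y = area / (sig * np.sqrt(2 * np.pi)) * np.exp(-(t - tr) ** 2 / (2 * sig**2))
y = y + rng.normal(0.0, 0.005, t.size)          # detector noise

# Fit a parabola to log(y) over the peak region only (where y is safely
# positive), then convert the quadratic coefficients back to peak terms.
m = y > 0.2 * y.max()
a, b, c = np.polyfit(t[m], np.log(y[m]), 2)
sig_fit = np.sqrt(-1.0 / (2.0 * a))             # width from t^2 coefficient
tr_fit = -b / (2.0 * a)                         # apex position
height = np.exp(c - b**2 / (4.0 * a))           # peak height
area_fit = height * sig_fit * np.sqrt(2.0 * np.pi)
```

    Restricting the fit to the upper part of the peak is what keeps the log transform well behaved when the baseline noise can dip below zero.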

  6. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    DimitrisG. Stavrakoudis

    2012-04-01

    This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims to reduce the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification as well as in 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  7. Preliminary three-dimensional potential flow simulation of a five-liter flask air injection experiment

    International Nuclear Information System (INIS)

    Davis, J.E.

    1977-01-01

    The preliminary results of an unsteady three-dimensional potential flow analysis of a five-liter flask air injection experiment (small-scale model simulation of a nuclear reactor steam condensation system) are presented. The location and velocity of the free water surface in the flask as a function of time are determined during pipe venting and bubble expansion processes. The analyses were performed using an extended version of the NASA-Ames Three-Dimensional Potential Flow Analysis System (POTFAN), which uses the vortex lattice singularity method of potential flow analysis. The pressure boundary condition at the free water surface and the boundary condition along the free jet boundary near the pipe exit were ignored for the purposes of the present study. The results of the analysis indicate that large time steps can be taken without significantly reducing the accuracy of the solutions and that the assumption of inviscid flow should not have an appreciable effect on the geometry and velocity of the free water surface. In addition, the computation time required for the solutions was well within acceptable limits.

  8. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  9. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets. Then, cheating attempts can be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes
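
    The cooperation requirement can be illustrated with a purely classical toy analogy (this is not the quantum protocol itself, just its correlation structure): the sender's random sign flip makes each recipient's record alone look like uniform noise, while combining both records reveals the key.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
key = rng.integers(0, 2, n)     # sender's random {0, pi} phase choices
a = rng.integers(0, 2, n)       # recipient A's raw measurement record
b = a ^ key                     # B's record: correlation sign set by the key

# Either record alone is uniform noise; combined, they recover the key.
recovered = a ^ b
alone_bias = abs(b.mean() - 0.5)
```

    The quantum scheme adds what this toy cannot provide: the nonorthogonality of the high-dimensional time-bin states is what prevents an eavesdropper from copying either record undetected.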

  10. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
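
    The starting point of such estimators can be sketched as follows. This is the generic pairwise-complete ("generalized sample") covariance under MCAR with an invented bandable truth; the paper's banding/thresholding regularization step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# True covariance: a bandable AR(1)-type matrix, Sigma_jk = 0.5^|j-k|.
n, p = 400, 8
S = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), S, size=n)
Xobs = np.where(rng.random((n, p)) < 0.8, X, np.nan)   # ~20% MCAR missing

def pairwise_cov(X):
    # Generalized sample covariance: entry (j, k) averages over the
    # samples where both coordinates are observed, which is unbiased-in-
    # spirit under MCAR since missingness is independent of the values.
    p = X.shape[1]
    C = np.empty((p, p))
    for j in range(p):
        for k in range(p):
            ok = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])
            xj, xk = X[ok, j], X[ok, k]
            C[j, k] = ((xj - xj.mean()) * (xk - xk.mean())).mean()
    return C

C = pairwise_cov(Xobs)
err = np.abs(C - S).max()   # entrywise error of the incomplete-data estimate
```

    In high dimensions this raw estimator is then banded or thresholded, which is where the minimax-rate analysis of the paper applies.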

  11. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*

    Science.gov (United States)

    Cai, T. Tony; Zhang, Anru

    2016-01-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471

  12. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    Science.gov (United States)

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalization regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray and synthetic data sets. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
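
    The objective behind such methods is the empirical AUC, which is just the fraction of correctly ordered (case, control) score pairs under a linear marker combination. A minimal illustration with made-up data follows; the penalized optimization over w that AucPR performs is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def auc(score, y):
    # Empirical AUC = fraction of (case, control) pairs ordered correctly
    # (the normalized Mann-Whitney U statistic; ties ignored).
    pos, neg = score[y == 1], score[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Made-up two-marker data; cases are shifted along both markers.
n = 100
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 2))
X[y == 1] += np.array([1.0, 0.5])
w = np.array([1.0, 0.5])          # a fixed linear marker combination
a = auc(X @ w, y)
```

    Because this pairwise objective is a step function of w, smooth or parametric surrogates are needed before penalized optimization can be applied, which is the gap the paper addresses.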

  13. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► 3D graphene/polyaniline has not been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material which is synthesized using in situ polymerization of aniline monomer on the graphene surface is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained by means of CVs with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of 3D graphene contribute to the high specific capacitance and good cyclic life.

  14. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

    Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively-parallel assays and simultaneous monitoring of the expression of thousands of genes in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information; this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a kernel-based nonlinear dimensionality reduction method built on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm which denoises datasets will be introduced as a replacement for the classical LLE KNN algorithm. In addition, a kernel method based support vector machine (SVM) will be used to classify genomic microarray data sets in this paper. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
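
    The overall pipeline, reduce dimensionality with LLE and then classify with an SVM, can be sketched with scikit-learn on synthetic stand-in data. This uses plain LLE and a stock RBF SVM, not the paper's fuzzy-KNN denoising variant, and all sizes are invented.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic stand-in for microarray data: 120 samples, 50 "genes",
# two classes separated along the first five features.
n, p = 120, 50
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :5] += 4.0

# Reduce 50 dimensions to 5 with LLE, then classify with an RBF SVM.
Z = LocallyLinearEmbedding(
    n_neighbors=10, n_components=5, eigen_solver="dense"
).fit_transform(X)
clf = SVC(kernel="rbf").fit(Z[::2], y[::2])    # train on even rows
acc = clf.score(Z[1::2], y[1::2])              # test on odd rows
```

    The fuzzy-KNN replacement in the paper targets exactly the weak point of this sketch: noisy neighborhoods in the raw high-dimensional space distort the reconstruction weights LLE relies on.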

  15. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region—as obtained by a numerical hydrodynamics and particle transport model—we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation field of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separations. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with orbital orientation.

  16. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.
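
    The rank-one structure is what makes recovery from axis-aligned point queries possible. The identity below (elementary algebra, with invented component functions) shows how d line queries through a known point z with f(z) ≠ 0 determine f exactly at any x.

```python
import numpy as np

# Hypothetical rank-one target on [0,1]^3: f(x) = f1(x1) f2(x2) f3(x3).
fs = [np.cos, np.exp, lambda t: 1.0 + t * t]

def f(x):
    return float(np.prod([fj(xj) for fj, xj in zip(fs, x)]))

# Anchor point z with f(z) != 0, as assumed in the paper's second result.
z = np.array([0.3, 0.5, 0.7])

def f_from_line_queries(x, z):
    # Query f only at points that differ from z in a single coordinate:
    # f(x) = prod_j f(z with j-th coord replaced by x_j) / f(z)^(d-1),
    # since each query contributes f_j(x_j) times the other factors at z.
    d = len(z)
    num = 1.0
    for j in range(d):
        zj = z.copy()
        zj[j] = x[j]
        num *= f(zj)
    return num / f(z) ** (d - 1)

x = np.array([0.1, 0.9, 0.4])
err = abs(f_from_line_queries(x, z) - f(x))
```

    In the paper, each univariate factor is additionally approximated from N sample points, which is where the O(dN^(-r)) rate comes from.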

  17. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars

    2013-01-01

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.

  18. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  19. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. The remaining major pitfall is their relatively poor photovoltaic performance in contrast to 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)(MA)PbI perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among the reported 2D devices, with excellent humidity resistance. The enhanced efficiency, from 12.3% (without Cs) to 13.7% (with 5% Cs), is attributed to well-controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility, and improved charge-transfer kinetics. Surprisingly, it is found that Cs doping yields superior stability for the 2D perovskite solar cells when subjected to a high-humidity environment without encapsulation. The device doped with 5% Cs degrades by only ca. 10% after 1400 hours of exposure at 30% relative humidity (RH), and exhibits significantly improved stability under heating and high-moisture environments. Our results provide an important step toward air-stable and fully printable low-dimensional perovskites as a next-generation renewable energy source.

  20. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: extreme bounds analysis, the minimum t-statistic over models, Sala...

  1. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two-dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
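    The field-solve half of the stepwise scheme described above (an FFT-based Poisson solve) can be sketched on a periodic grid. The grid size and the test charge distribution are illustrative; the superparticle deposition, area weighting, and units of the CERN code are not reproduced:

```python
import numpy as np

def poisson_fft(rho, L=1.0):
    """Solve laplacian(phi) = -rho on a periodic L x L grid via FFT."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid divide-by-zero; mean of rho assumed zero
    phi_hat = np.fft.fft2(rho) / k2     # -k^2 * phi_hat = -rho_hat  =>  phi_hat = rho_hat / k^2
    phi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(phi_hat))

# Check against a single Fourier mode: for rho = sin(2*pi*x)*sin(2*pi*y),
# the exact solution is phi = rho / (8*pi^2).
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
phi = poisson_fft(rho)
phi_exact = rho / (8.0 * np.pi**2)
print(np.max(np.abs(phi - phi_exact)))  # machine-precision agreement
```

In a full particle-in-cell step, this solve alternates with a charge-deposition step and a Newton push of each superparticle in the computed field.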

  2. Importance of two-dimensional effects for the generation of ultra high pressures obtained in laser colliding foil experiments

    Energy Technology Data Exchange (ETDEWEB)

    Faral, B.; Fabbro, R. (Laboratoire d'Utilisation des Lasers Intenses, Ecole Polytechnique, 91128 Palaiseau Cedex (France)); Virmont, J. (Laboratoire de Physique des Milieux Ionises, Ecole Polytechnique, 91128 Palaiseau Cedex (France)); Cottet, F.; Romain, J.P. (Laboratoire d'Energetique et de Detonique, Ecole Nationale Superieure de Mecanique et d'Aerotechnique, 86034 Poitiers (France)); Pepin, H. (Institut National de la Recherche Scientifique Energie, Montreal (Canada))

    1990-02-01

    A 12 μm polyester foil is accelerated by a 0.26 μm wavelength laser and collides with a 15 μm thick molybdenum foil. The accelerating pressure is 45 Mbar (laser intensity ~3-4×10^14 W/cm^2) and gives the polyester foil a velocity of about 160 km/sec. The measurement of the shock pressure induced in the impacted foil is made with an improved step technique. When the initial spacing between the two foils is too large compared to the focal spot radius, i.e., larger than 20-30 μm, the experimental results cannot be reproduced with one-dimensional simulations; this is only possible with a two-dimensional Lagrangian code, developed for this purpose, that takes into account the strong deformation of the accelerated foil. Finally, even with the low level of x-ray heating due to the ablation plasma, multihundred-megabar pressures can be obtained within a very short time.

  3. Importance of two-dimensional effects for the generation of ultra high pressures obtained in laser colliding foil experiments

    International Nuclear Information System (INIS)

    Faral, B.; Fabbro, R.; Virmont, J.; Cottet, F.; Romain, J.P.; Pepin, H.

    1990-01-01

    A 12 μm polyester foil is accelerated by a 0.26 μm wavelength laser and collides with a 15 μm thick molybdenum foil. The accelerating pressure is 45 Mbar (laser intensity ~3-4×10^14 W/cm^2) and gives the polyester foil a velocity of about 160 km/sec. The measurement of the shock pressure induced in the impacted foil is made with an improved step technique. When the initial spacing between the two foils is too large compared to the focal spot radius, i.e., larger than 20-30 μm, the experimental results cannot be reproduced with one-dimensional simulations; this is only possible with a two-dimensional Lagrangian code, developed for this purpose, that takes into account the strong deformation of the accelerated foil. Finally, even with the low level of x-ray heating due to the ablation plasma, multihundred-megabar pressures can be obtained within a very short time.

  4. Two-dimensional Semiconductor-Superconductor Hybrids

    DEFF Research Database (Denmark)

    Suominen, Henri Juhani

    This thesis investigates hybrid two-dimensional semiconductor-superconductor (Sm-S) devices and presents a new material platform exhibiting intimate Sm-S coupling straight out of the box. Starting with the conventional approach, we investigate coupling superconductors to buried quantum wells... To overcome these issues we integrate the superconductor directly into the semiconducting material growth stack, depositing it in-situ in a molecular beam epitaxy system under high vacuum. We present a number of experiments on these hybrid heterostructures, demonstrating near-unity interface transparency...

  5. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States)]; Ham, Donhee, E-mail: donhee@seas.harvard.edu [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)]

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  6. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Full Text Available Supercapacitors are a new type of energy-storage device and have attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered a promising supercapacitor material because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and then three-dimensional (3D) graphene foam is prepared using nickel foam as a template and FeCl3/HCl solution as an etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance, exhibiting a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. The 3D graphene foam is therefore expected to have potential applications in supercapacitors.
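    Specific capacitances such as the quoted 128 F/g are conventionally extracted from galvanostatic discharge via C = I·Δt/(m·ΔV). A sketch with hypothetical discharge numbers (the paper's raw discharge data are not given here):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance C = I * dt / (m * dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical example: a 1 mg electrode cycled at 1 A/g (i.e., 1 mA),
# discharging over a 1 V window in 128 s.
print(specific_capacitance(1e-3, 128.0, 1e-3, 1.0))  # 128.0 F/g
```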

  7. Extinction maps toward the Milky Way bulge: Two-dimensional and three-dimensional tests with apogee

    Energy Technology Data Exchange (ETDEWEB)

    Schultheis, M., E-mail: mathias.schultheis@oca.eu [Université de Nice Sophia-Antipolis, CNRS, Observatoire de Côte d'Azur, Laboratoire Lagrange, 06304 Nice Cedex 4 (France)]; Zasowski, G., E-mail: gail.zasowski@gmail.com [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)]; Allende Prieto, C. [Instituto de Astrofísica de Canarias, Calle Vía Láctea s/n, E-38205 La Laguna, Tenerife (Spain)]; Anders, F.; Chiappini, C. [Leibniz-Institut für Astrophysik Potsdam (AIP), D-14482 Potsdam (Germany)]; Beaton, R. L.; García Pérez, A. E.; Majewski, S. R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States)]; Beers, T. C. [National Optical Astronomy Observatory, Tucson, AZ 85719 (United States)]; Bizyaev, D. [Apache Point Observatory, Sunspot, NM 88349 (United States)]; Frinchaboy, P. M. [Department of Physics and Astronomy, Texas Christian University, TCU Box 298840, Fort Worth, TX 76129 (United States)]; Ge, J. [Astronomy Department, University of Florida, Gainesville, FL 32611 (United States)]; Hearty, F.; Schneider, D. P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States)]; Holtzman, J. [New Mexico State University, Las Cruces, NM 88003 (United States)]; Muna, D. [Department of Astronomy, The Ohio State University, Columbus, OH 43210 (United States)]; Nidever, D. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States)]; Shetrone, M. [McDonald Observatory, The University of Texas at Austin, Austin, TX 78712 (United States)]

    2014-07-01

    Galactic interstellar extinction maps are powerful and necessary tools for Milky Way structure and stellar population analyses, particularly toward the heavily reddened bulge and in the midplane. However, due to the difficulty of obtaining reliable extinction measures and distances for a large number of stars that are independent of these maps, tests of their accuracy and systematics have been limited. Our goal is to assess a variety of photometric stellar extinction estimates, including both two-dimensional and three-dimensional extinction maps, using independent extinction measures based on a large spectroscopic sample of stars toward the Milky Way bulge. We employ stellar atmospheric parameters derived from high-resolution H-band Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectra, combined with theoretical stellar isochrones, to calculate line-of-sight extinction and distances for a sample of more than 2400 giants toward the Milky Way bulge. We compare these extinction values to those predicted by individual near-IR and near+mid-IR stellar colors, two-dimensional bulge extinction maps, and three-dimensional extinction maps. The long baseline, near+mid-IR stellar colors are, on average, the most accurate predictors of the APOGEE extinction estimates, and the two-dimensional and three-dimensional extinction maps derived from different stellar populations along different sightlines show varying degrees of reliability. We present the results of all of the comparisons and discuss reasons for the observed discrepancies. We also demonstrate how the particular stellar atmospheric models adopted can have a strong impact on this type of analysis, and discuss related caveats.

  8. Extinction maps toward the Milky Way bulge: Two-dimensional and three-dimensional tests with apogee

    International Nuclear Information System (INIS)

    Schultheis, M.; Zasowski, G.; Allende Prieto, C.; Anders, F.; Chiappini, C.; Beaton, R. L.; García Pérez, A. E.; Majewski, S. R.; Beers, T. C.; Bizyaev, D.; Frinchaboy, P. M.; Ge, J.; Hearty, F.; Schneider, D. P.; Holtzman, J.; Muna, D.; Nidever, D.; Shetrone, M.

    2014-01-01

    Galactic interstellar extinction maps are powerful and necessary tools for Milky Way structure and stellar population analyses, particularly toward the heavily reddened bulge and in the midplane. However, due to the difficulty of obtaining reliable extinction measures and distances for a large number of stars that are independent of these maps, tests of their accuracy and systematics have been limited. Our goal is to assess a variety of photometric stellar extinction estimates, including both two-dimensional and three-dimensional extinction maps, using independent extinction measures based on a large spectroscopic sample of stars toward the Milky Way bulge. We employ stellar atmospheric parameters derived from high-resolution H-band Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectra, combined with theoretical stellar isochrones, to calculate line-of-sight extinction and distances for a sample of more than 2400 giants toward the Milky Way bulge. We compare these extinction values to those predicted by individual near-IR and near+mid-IR stellar colors, two-dimensional bulge extinction maps, and three-dimensional extinction maps. The long baseline, near+mid-IR stellar colors are, on average, the most accurate predictors of the APOGEE extinction estimates, and the two-dimensional and three-dimensional extinction maps derived from different stellar populations along different sightlines show varying degrees of reliability. We present the results of all of the comparisons and discuss reasons for the observed discrepancies. We also demonstrate how the particular stellar atmospheric models adopted can have a strong impact on this type of analysis, and discuss related caveats.

  9. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser consisting of a Mach-Zehnder interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement 256-ary (16-ary) coded free-space optical communication, transmitting a 256-gray-scale (16-gray-scale) picture. The results show that zero bit-error-rate performance was achieved.
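    In an N-bit scheme of this kind, each OAM mode carries one bit: the mode is either present in or absent from the superposition. A toy mapping from an 8-bit symbol (one gray level) to its mode set; the topological charges are illustrative, not the paper's:

```python
OAM_MODES = [1, 2, 3, 4, 5, 6, 7, 8]  # illustrative topological charges, one per bit

def encode(byte):
    """Map an 8-bit symbol to the set of OAM modes present in the superposition."""
    return [m for i, m in enumerate(OAM_MODES) if (byte >> i) & 1]

def decode(modes):
    """Recover the symbol from the detected mode set."""
    return sum(1 << OAM_MODES.index(m) for m in modes)

# The 8-mode alphabet is 256-ary: every byte round-trips.
assert all(decode(encode(b)) == b for b in range(256))
print(encode(0b10100101))  # [1, 3, 6, 8]
```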

  10. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity over the last decade have spurred the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Three-Dimensional Porous Nitrogen-Doped NiO Nanostructures as Highly Sensitive NO2 Sensors

    Directory of Open Access Journals (Sweden)

    Van Hoang Luan

    2017-10-01

    Full Text Available Nickel oxide has been widely used in chemical sensing applications because it has excellent p-type semiconducting properties with high chemical stability. Here, we present a novel technique for fabricating three-dimensional porous nitrogen-doped nickel oxide nanosheets as a highly sensitive NO2 sensor. The elaborate nanostructure was prepared by a simple and effective hydrothermal synthesis method, and nitrogen doping was subsequently achieved by thermal treatment with ammonia gas. When the p-type dopant, i.e., nitrogen atoms, was introduced into the three-dimensional nanostructures, the nickel-oxide-nanosheet-based sensor showed considerable NO2 sensing ability, with two-fold higher responsivity and sensitivity compared to non-doped nickel-oxide-based sensors.

  12. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on the time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
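    The kriging kernel that all three parallel approaches accelerate can be sketched sequentially. The sketch below is ordinary kriging with an exponential covariance model and no nugget; the covariance parameters and data points are illustrative, not those of the paper, and each grid point's solve is the unit of work that MPI, MapReduce, or a GPU would distribute:

```python
import numpy as np

def ordinary_kriging(xy, z, grid_xy, range_=1.0, sill=1.0):
    """Ordinary kriging with an exponential covariance model (no nugget)."""
    def cov(d):
        return sill * np.exp(-d / range_)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system: [C 1; 1^T 0] [w; mu] = [c; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0                       # Lagrange-multiplier corner
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):     # independent per grid point -> parallelizable
        b = np.ones(n + 1)
        b[:n] = cov(np.linalg.norm(xy - p, axis=-1))
        w = np.linalg.solve(A, b)[:n]
        out[i] = w @ z
    return out

# Kriging is an exact interpolator: predicting at a sample location
# reproduces the sample value.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_kriging(pts, vals, pts[:1]))  # ~[1.0]
```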

  13. High velocity impact experiment (HVIE)

    Energy Technology Data Exchange (ETDEWEB)

    Toor, A.; Donich, T.; Carter, P.

    1998-02-01

    The HVIE space project was conceived as a way to measure the absolute EOS for approximately 10 materials at pressures up to ~30 Mb with order-of-magnitude higher accuracy than obtainable in any comparable experiment conducted on earth. The experiment configuration is such that each of the 10 materials interacts with all of the others, thereby producing one hundred independent, simultaneous EOS experiments. The materials will be selected to provide critical information to weapons designers, National Ignition Facility target designers, and planetary and geophysical scientists. In addition, HVIE will provide important scientific information to other communities, including the Ballistic Missile Defense Organization and the lethality and vulnerability community. The basic HVIE concept is to place two probes in counter-rotating, highly elliptical orbits and collide them at high velocity (20 km/s) at 100 km altitude above the earth. The low altitude of the experiment will provide quick debris strip-out from orbit due to atmospheric drag. The preliminary conceptual evaluation of the HVIE has found no show stoppers. The design has been very easy to keep within the lift capabilities of commonly available rides to low earth orbit, including the space shuttle. The cost of approximately 69 million dollars for one hundred EOS experiments that will yield the much-needed high-accuracy, absolute measurement data is a bargain!

  14. The Yosemite Extreme Panoramic Imaging Project: Monitoring Rockfall in Yosemite Valley with High-Resolution, Three-Dimensional Imagery

    Science.gov (United States)

    Stock, G. M.; Hansen, E.; Downing, G.

    2008-12-01

    Yosemite Valley experiences numerous rockfalls each year, with over 600 rockfall events documented since 1850. However, monitoring rockfall activity has proved challenging without high-resolution "basemap" imagery of the Valley walls. The Yosemite Extreme Panoramic Imaging Project, a partnership between the National Park Service and xRez Studio, has created an unprecedented image of Yosemite Valley's walls by utilizing gigapixel panoramic photography, LiDAR-based digital terrain modeling, and three-dimensional computer rendering. Photographic capture was accomplished by 20 separate teams shooting from key overlapping locations throughout Yosemite Valley. The shots were taken simultaneously in order to ensure uniform lighting, with each team taking over 500 overlapping shots from each vantage point. Each team's shots were then assembled into 20 gigapixel panoramas. In addition, all 20 gigapixel panoramas were projected onto a 1 meter resolution digital terrain model in three-dimensional rendering software, unifying Yosemite Valley's walls into a vertical orthographic view. The resulting image reveals the geologic complexity of Yosemite Valley in high resolution and represents one of the world's largest photographic captures of a single area. Several rockfalls have already occurred since image capture, and repeat photography of these areas clearly delineates rockfall source areas and failure dynamics. Thus, the imagery has already proven to be a valuable tool for monitoring and understanding rockfall in Yosemite Valley. It also sets a new benchmark for the quality of information a photographic image, enabled with powerful new imaging technology, can provide for the earth sciences.

  15. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader's understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...

  16. The impact of real-time, Internet experiments versus interactive, asynchronous replays of experiments on high school students' science concepts and attitudes

    Science.gov (United States)

    Kubasko, Dennis S., Jr.

    The purpose of this study was to investigate whether students' learning experiences were similar or different with an interactive, live connection via the Internet in real-time to an Atomic Force Microscope (AFM) versus a stored replay of AFM experiments. Did the two treatments influence students' attitudes towards the learning experience? Are there differences in students' understandings of viruses and science investigations? In addition, this study investigated treatment effects on students' understandings of the nature of science. The present study drew upon the research that examined students' attitudes toward science, students' views of the nature of science, instructional technology in education, and prior research on the nanoManipulator. Specific efforts have been made to address reform efforts in science education throughout the literature review. Eighty-five high school biology students participated in the nanoManipulator experience (44 males, 41 females, 64 Euro-American, 16 African-American, and 5 of other ethnicities). Two high school classes were randomly selected and administered the interactive, real-time treatment. Two different high school classes were randomly selected and administered the limited-interaction, experimental replay treatment. The intervention occurred over a one-week period. Qualitative and quantitative measures were used to examine the differences between two treatment conditions. Experiential, affective, cognitive, and the nature of science domains were assessed. Findings show that the questions and statements made in synchronous time by the live treatment group were significantly different than students' questions and statements in asynchronous communication. Students in the replay treatment made more statements about what they learned or knew about the experience than did students in the live experience. Students in both groups showed significant gains in understanding viruses (particularly viral dimensionality and shape

  17. Three-dimensional porous graphene-Co3O4 nanocomposites for high performance photocatalysts

    Energy Technology Data Exchange (ETDEWEB)

    Bin, Zeng, E-mail: 21467855@qq.com [College of Mechanical Engineering, Hunan University of Arts and Science, Changde 415000 (China)]; Hui, Long [Department of Applied Physics and Materials Research Center, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong)]

    2015-12-01

    Highlights: • Three-dimensional porous graphene-Co3O4 nanocomposites were synthesized. • Excellent photocatalytic performance. • Separated from the reaction medium by magnetic decantation. - Abstract: Novel three-dimensional porous graphene-Co3O4 nanocomposites were synthesized by freeze-drying methods. Scanning and transmission electron microscopy revealed that the graphene formed a three-dimensional porous structure with Co3O4 nanoparticle-decorated surfaces. The as-obtained product showed high photocatalytic efficiency and could be easily separated from the reaction medium by magnetic decantation. This nanocomposite is expected to have potential in water purification applications.

  18. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step, and the quality of features in discriminating different classes plays an important role. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a similarity-dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier; hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot, and some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
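    One plausible construction of the two plot coordinates is, for each sample, a "similarity" coordinate (mean distance to its own class) and a "dissimilarity" coordinate (mean distance to the other classes). The sketch below is an assumption for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def similarity_dissimilarity(X, y):
    """For each sample, return (mean distance to its own class,
    mean distance to the other classes) -- the two plot coordinates."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sim, dis = [], []
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False                   # exclude the point itself
        sim.append(D[i, same].mean())
        dis.append(D[i, y != y[i]].mean())
    return np.array(sim), np.array(dis)

# Two well-separated synthetic classes in a 5-dimensional feature space:
# every point should lie below the diagonal (dissimilarity > similarity).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(3.0, 0.1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
sim, dis = similarity_dissimilarity(X, y)
print((dis > sim).all())  # True for separable classes
```

Plotting `sim` against `dis` gives the two-dimensional view regardless of the original feature dimension; overlapping classes would show points near or above the diagonal.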

  19. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  20. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality images at high speed. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method has already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the filtered back projection (FBP) method has been employed to directly process fan-beam projection data for reconstruction. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the FBP method, was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter applies the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded by the rebinning step. Therefore, the effect of the rebinning interpolation error on the reconstructed images has been analyzed theoretically, and the use of spline interpolation, which allows the acquisition of high-quality images with fewer errors, is demonstrated by numerical and visual evaluation based on simulated and actual data. Computation time was reduced to 1/15 for a 512×512 image matrix and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
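    The rebinning step maps each fan ray (source angle β, fan angle γ) to the parallel ray with θ = β + γ and detector offset s = R·sin γ, where R is the source radius for an equiangular fan. A sketch under illustrative geometry, using a centered disk whose parallel projection 2√(r²−s²) is known analytically and is independent of angle; linear interpolation is used for brevity where the abstract recommends spline:

```python
import numpy as np

# Geometry (hypothetical numbers): source radius R, disk radius r.
R, r = 2.0, 0.5
gammas = np.linspace(-0.25, 0.25, 201)        # equiangular fan samples

# Fan-beam data of a centered disk. Its parallel projection does not depend
# on the view angle, so the fan data depend on gamma only through s = R*sin(gamma).
s_fan = R * np.sin(gammas)
p_fan = 2.0 * np.sqrt(np.clip(r**2 - s_fan**2, 0.0, None))

# Rebin onto an equispaced parallel detector by interpolation in s.
s_par = np.linspace(-0.45, 0.45, 101)
p_par = np.interp(s_par, s_fan, p_fan)

# Compare with the analytic parallel projection 2*sqrt(r^2 - s^2).
err = np.max(np.abs(p_par - 2.0 * np.sqrt(r**2 - s_par**2)))
print(err)  # small interpolation error
```

After rebinning, the parallel sinogram can be fed to the two-dimensional Fourier method; the interpolation error shown here is exactly the error source whose effect on the reconstruction the paper analyzes.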

  1. Secondary instability and transition in three-dimensional boundary layers

    Energy Technology Data Exchange (ETDEWEB)

    Stolte, A.; Bertolotti, F.P.; Koch, W. (Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Goettingen (Germany). Inst. fuer Stroemungsmechanik)

    1999-01-01

    Stationary and traveling crossflow modes are the most prominent disturbances in the highly accelerated three-dimensional flow near the leading edge of a swept wing. Near transition onset, secondary three-dimensional instabilities of high frequency can be observed in such flows. A model flow on the basis of a DLR swept plate experiment allows a detailed study of transition scenarios triggered by crossflow instabilities, since the favorable pressure gradient over the whole plate inhibits instabilities of Tollmien-Schlichting type. In order to shed some light upon the role of the high-frequency secondary instabilities, the saturation characteristics of crossflow vortices in this model flow are investigated using the parabolized stability equations. In contrast to nonlinear equilibrium solutions of steady crossflow vortices, the nonlinear parabolized stability equations (PSE) results yield different maximal disturbance amplitudes for different initial amplitudes. (orig./AKF)

  3. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes. Regularization methods with L1 and Lp penalty have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes, directly identifying a small subset of genes from ultra-high (m > 10,000 dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performed superbly in our computational studies.
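
The computational point in the abstract, working with an n × n kernel matrix instead of the m-dimensional feature space, can be illustrated with plain ridge regression, where the primal and dual solutions coincide. A hedged numpy sketch (simulated data, not the authors' AFT code; dimensions are kept small enough to check both routes):

```python
import numpy as np

# With n samples and m >> n features ("genes"), ridge regression can be
# solved in the dual using only the n x n Gram matrix K = X X^T, instead of
# the m x m matrix X^T X required by the primal form.
rng = np.random.default_rng(0)
n, m, lam = 30, 500, 1.0
X = rng.normal(size=(n, m))
y = rng.normal(size=n)

# Dual route: alpha = (K + lam I)^{-1} y, then w = X^T alpha  (n x n solve)
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(n), y)
w_dual = X.T @ alpha

# Primal route: w = (X^T X + lam I)^{-1} X^T y  (m x m solve, for comparison)
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)
assert np.allclose(w_dual, w_primal)
```

The two solves agree, but the dual one scales with the number of samples only; replacing the linear Gram matrix with a nonlinear kernel gives kernel ridge regression, the building block of the proposed kernel AFT approach.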

  5. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further study of the physical mechanism of black-hole radiation. (orig.)

  6. Two-dimensional thermal simulations of aluminum and carbon ion strippers for experiments at SPIRAL2 using the highest beam intensities

    International Nuclear Information System (INIS)

    Tahir, N.A.; Kim, V.; Lamour, E.; Lomonosov, I.V.; Piriz, A.R.; Rozet, J.P.; Stöhlker, Th.; Sultanov, V.; Vernhet, D.

    2012-01-01

    In this paper we report on two-dimensional numerical simulations of the heating of a rotating, wheel-shaped target impacted by the full-intensity ion beam that will be delivered by the SPIRAL2 facility at Caen, France. The purpose of this work is to study the heating of the solid targets that will be used to strip the fast ions of SPIRAL2 to the high charge states required for the FISIC (Fast Ion-Slow Ion Collision) experiments. Strippers of aluminum with different emissivities, and of carbon, are exposed to high beam currents of different ion species such as oxygen, neon and argon. These studies show that carbon, due to its much higher sublimation temperature and much higher emissivity, is more favorable than aluminum. At the highest beam intensities, an aluminum stripper does not survive. However, the problem of induced thermal stresses and long-term material fatigue needs to be investigated before a final conclusion can be drawn.

  7. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing AUC-based works in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results We propose an AUC-based approach u...

  8. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties in analyzing these data come mainly from two aspects: first, there are major statistical and computational challenges in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model are obtained by our proposed hybrid LASSLE (LASSO + LSE) method, which combines regularization (to control sparsity) and least squares estimation (to reduce bias and mean-squared error). We then employ several measures of connectivity, with an emphasis on partial directed coherence (PDC), which captures the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel, relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
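
The hybrid "regularize, then refit" idea can be sketched on a toy VAR(1): a LASSO fit (here a small hand-rolled coordinate descent, to keep the sketch self-contained) selects the support of each row of the transition matrix, and ordinary least squares refits the selected coefficients to undo shrinkage. This is an illustrative reading of LASSLE, not the authors' code; the channel count, penalty, and dynamics are invented:

```python
import numpy as np

def lasso_cd(Z, y, alpha, sweeps=200):
    """Minimal coordinate-descent LASSO for (1/2n)||y - Zw||^2 + alpha*||w||_1."""
    n, p = Z.shape
    w = np.zeros(p)
    col_sq = (Z ** 2).sum(axis=0) / n
    for _ in range(sweeps):
        for j in range(p):
            r = y - Z @ w + Z[:, j] * w[j]            # partial residual
            rho = Z[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w

# Simulate a sparse VAR(1): X_t = A X_{t-1} + noise
rng = np.random.default_rng(1)
p, T = 8, 2000
A_true = 0.5 * np.eye(p)
A_true[0, 3] = 0.4                                    # one cross-channel link
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + rng.normal(size=p)

Z, Y = X[:-1], X[1:]
A_hat = np.zeros((p, p))
for i in range(p):                                    # one regression per channel
    support = np.flatnonzero(lasso_cd(Z, Y[:, i], alpha=0.05))
    if support.size:                                  # least-squares refit (LSE step)
        coef, *_ = np.linalg.lstsq(Z[:, support], Y[:, i], rcond=None)
        A_hat[i, support] = coef

assert A_hat[0, 0] != 0 and A_hat[0, 3] != 0          # true links recovered
```

The LASSO step controls which entries of the transition matrix are nonzero; the refit step removes the shrinkage bias on the surviving coefficients, which is the stated motivation for combining the two.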

  9. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors … follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  10. Subjective figure reversal in two- and three-dimensional perceptual space.

    Science.gov (United States)

    Radilová, J; Radil-Weiss, T

    1984-08-01

    A permanently illuminated pattern of Mach's truncated pyramid can be perceived, according to the experimental instruction given, either as a three-dimensional reversible figure with spontaneously changing convex and concave interpretations (in one experiment), or as a two-dimensional reversible figure-ground pattern (in another experiment). The reversal rate was about half as fast, without the subjects being aware of it, when the pattern was perceived as a three-dimensional figure compared to when it was perceived as two-dimensional. It may be hypothesized that in the three-dimensional case the process of perception requires more sequential steps than in the two-dimensional one.

  11. Pd nanoparticles supported on three-dimensional graphene aerogels as highly efficient catalysts for methanol electrooxidation

    International Nuclear Information System (INIS)

    Liu, Mingrui; Peng, Cheng; Yang, Wenke; Guo, Jiaojiao; Zheng, Yixiong; Chen, Peiqin; Huang, Tingting; Xu, Jing

    2015-01-01

    Well-dispersed Pd nanoparticles supported on three-dimensional graphene aerogels (Pd/3DGA) were successfully prepared via a facile and efficient hydrothermal method without surfactant and template. The morphology and structure of the as-prepared Pd/3DGA nanocomposites were investigated by scanning electron microscopy (SEM) and X-ray diffraction (XRD). SEM showed that the Pd nanoparticles with a small average diameter and narrow size distribution were uniformly deposited on the surface of the self-assembled three-dimensional graphene aerogels. Raman spectra revealed the surface properties of 3DGA and its interaction with metallic nanoparticles. Cyclic voltammetric (CV) and chronoamperometric (CA) experiments further exhibited its superior catalytic activity and stability for the electro-oxidation of methanol in alkaline media, making it a promising anodic catalyst for direct alkaline alcohol fuel cells (DAAFCs).

  12. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
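
The two-stage construction, per-cluster PCA factors followed by an RV coefficient between factor sets, can be sketched directly. This is an illustrative toy (simulated clusters sharing one latent source; the factor counts and noise levels are invented, and the estimation details of MSFA are not reproduced):

```python
import numpy as np

# Two "clusters" of channels driven by one shared latent source.
rng = np.random.default_rng(2)
T = 500
shared = rng.normal(size=(T, 1))                 # common source linking clusters
X = shared @ rng.normal(size=(1, 6)) + 0.5 * rng.normal(size=(T, 6))
Y = shared @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(T, 4))

def pca_factors(Z, k):
    """Leading k principal-component scores of the centered data matrix."""
    Zc = Z - Z.mean(axis=0)
    U, s, _ = np.linalg.svd(Zc, full_matrices=False)
    return U[:, :k] * s[:k]

def rv_coefficient(Fx, Fy):
    """RV coefficient: a matrix-valued generalization of squared correlation."""
    Sxy = Fx.T @ Fy
    Sxx = Fx.T @ Fx
    Syy = Fy.T @ Fy
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

rv = rv_coefficient(pca_factors(X, 2), pca_factors(Y, 2))
assert 0.0 <= rv <= 1.0                          # RV always lies in [0, 1]
```

The RV coefficient plays the role of a squared correlation between two sets of variables, so computing it between every pair of clusters yields the global (between-cluster) connectivity matrix.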

  13. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...

  14. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data and biclustering (applied to gene expression data analysis. The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered quickly. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
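
The core trick, computing supports and closures with matrix operations on the bit table, can be shown in a few lines. A sketch under assumed details (the actual algorithm's enumeration strategy and MATLAB implementation are not reproduced):

```python
import numpy as np

# With a binary transaction-by-item matrix D, the rows supporting an itemset
# are the AND of its columns, and the closure of the itemset is the set of
# items present in every supporting row -- both are cheap matrix operations.
D = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=bool)     # 4 transactions x 4 items

def closure(D, itemset):
    rows = D[:, itemset].all(axis=1)         # transactions containing itemset
    return np.flatnonzero(D[rows].all(axis=0)), int(rows.sum())

items, support = closure(D, [0])             # start from the itemset {0}
# Item 0 always co-occurs with item 1, so the closed itemset is {0, 1}.
assert list(items) == [0, 1] and support == 3
```

A closed itemset is exactly one that equals its own closure; a bicluster corresponds to the pair (supporting rows, closed itemset), which is why the same bit-table machinery serves both tasks.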

  15. Analysis of Operating Performance and Three Dimensional Magnetic Field of High Voltage Induction Motors with Stator Chute

    Directory of Open Access Journals (Sweden)

    WANG Qing-shan

    2017-06-01

    Full Text Available In view of the difficulties of rotor chute technology in high voltage induction motors, a design method adopting a stator chute structure is put forward. A mathematical model of the three dimensional nonlinear transient field for solving the stator chute in a high voltage induction motor is set up. Based on the three dimensional entity model of the motor, a three dimensional finite element method based on the T,ψ-ψ electromagnetic potentials is adopted for the analysis and calculation of the stator chute under rated conditions. The distributions along the axial direction of the fundamental-wave magnetic field and the tooth-harmonic magnetic field after the stator chute are analyzed, and the weakening effect on the main tooth-harmonic magnetic field is investigated. Furthermore, a comparative analysis of the main performance parameters of the chute and the straight slot is carried out under rated conditions. The results show that the electrical performance of the stator chute is better than that of the straight slot in high voltage induction motors, and that the tooth harmonics are sharply decreased.

  16. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The

  17. Five-dimensional PPN formalism and experimental test of Kaluza-Klein theory

    International Nuclear Information System (INIS)

    Xu Peng; Ma Yongge

    2007-01-01

    The parametrized post-Newtonian formalism for 5-dimensional metric theories with a compact extra dimension is developed. The relation of the 5-dimensional and 4-dimensional formulations is then analyzed, in order to compare the higher dimensional theories of gravity with experiments. It turns out that the value of post-Newtonian parameter γ in the reduced 5-dimensional Kaluza-Klein theory is two times smaller than that in 4-dimensional general relativity. The departure is due to the existence of an extra dimension in the Kaluza-Klein theory. Thus the confrontation between the reduced 4-dimensional formalism and Solar system experiments raises a severe challenge to the classical Kaluza-Klein theory
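
The experimental stakes of a halved γ can be made concrete with the standard PPN light-deflection formula (a textbook PPN result, not derived in this abstract):

```latex
\delta\theta \;=\; \frac{1+\gamma}{2}\,\frac{4GM}{c^{2}b},
\qquad
\gamma_{\mathrm{GR}} = 1,
\qquad
\gamma_{\mathrm{KK}} = \tfrac{1}{2}
\;\Rightarrow\;
\delta\theta_{\mathrm{KK}} = \tfrac{3}{4}\,\delta\theta_{\mathrm{GR}}.
```

A 25% deficit in the deflection of light grazing the Sun is decisively excluded by Solar-system measurements, which constrain γ − 1 at the 10^-5 level; this is the "severe challenge" to the classical Kaluza-Klein theory referred to in the abstract.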

  18. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of earlier prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that an increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is also studied by means of numerical analysis, focusing on their extinction dynamics associated with the ghosts. Such networks allow one to explore the properties of ghosts living in high dimensional phase spaces with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold

  19. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  20. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only the spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, extraction and evaluation of textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Both spectral and textural information were used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input

  1. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would

  2. Stable Graphene-Two-Dimensional Multiphase Perovskite Heterostructure Phototransistors with High Gain.

    Science.gov (United States)

    Shao, Yuchuan; Liu, Ye; Chen, Xiaolong; Chen, Chen; Sarpkaya, Ibrahim; Chen, Zhaolai; Fang, Yanjun; Kong, Jaemin; Watanabe, Kenji; Taniguchi, Takashi; Taylor, André; Huang, Jinsong; Xia, Fengnian

    2017-12-13

    Recently, two-dimensional (2D) organic-inorganic perovskites emerged as an alternative to their three-dimensional (3D) counterparts in photovoltaic applications, with improved moisture resistance. Here, we report a stable, high-gain phototransistor consisting of a monolayer graphene on hexagonal boron nitride (hBN) covered by a 2D multiphase perovskite heterostructure, which was realized using a newly developed two-step ligand exchange method. In this phototransistor, the multiple phases with varying bandgap in the 2D perovskite thin film are aligned for efficient electron-hole pair separation, leading to a high responsivity of ∼10^5 A W^-1 at 532 nm. Moreover, the designed phase alignment method aggregates more hydrophobic butylammonium cations close to the upper surface of the 2D perovskite thin film, preventing the permeation of moisture and enhancing the device stability dramatically. In addition, faster photoresponse and smaller 1/f noise observed in the 2D perovskite phototransistors indicate a smaller density of deep hole traps in the 2D perovskite thin film compared with its 3D counterparts. These desirable properties not only improve the performance of the phototransistor, but also provide a new direction for the future enhancement of the efficiency of 2D perovskite photovoltaics.

  3. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
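
The LSB-tree builds on classical locality sensitive hashing; the underlying collision idea can be sketched with plain random-projection LSH (this is generic E2LSH-style hashing, not the LSB-tree itself, and all parameters are invented):

```python
import numpy as np

# Points are hashed with random-projection functions h(x) = floor((a.x + b)/w);
# near neighbors collide in some table with high probability, so a query only
# scans the colliding candidates instead of the whole dataset.
rng = np.random.default_rng(3)
dim, w, n_tables, n_hashes = 32, 4.0, 8, 4

def make_table():
    A = rng.normal(size=(n_hashes, dim))          # random projection directions
    b = rng.uniform(0, w, size=n_hashes)          # random offsets
    return A, b

def key(A, b, x):
    return tuple(np.floor((A @ x + b) / w).astype(int))

data = rng.normal(size=(200, dim))
tables = [make_table() for _ in range(n_tables)]
index = [dict() for _ in range(n_tables)]
for i, x in enumerate(data):
    for (A, b), tab in zip(tables, index):
        tab.setdefault(key(A, b, x), []).append(i)

query = data[7] + 0.01 * rng.normal(size=dim)     # a point very near data[7]
candidates = set()
for (A, b), tab in zip(tables, index):
    candidates.update(tab.get(key(A, b, query), []))

best = min(candidates, key=lambda i: np.linalg.norm(data[i] - query))
assert best == 7                                  # the true near neighbor is found
```

The rigorous/ad hoc trade-off in the abstract is about how many such tables and hash compositions are needed for a guarantee; the LSB-tree's contribution is organizing the hash values so that one B-tree range scan replaces many table lookups.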

  4. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    Science.gov (United States)

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher
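
Step (2) of the work-flow matters because standard global normalization silently assumes a symmetric, mostly-null distribution of changes. A small simulation (illustrative only; the DSE test and the HMM-assisted normalization themselves are not reproduced here) shows the bias when 60% of the variables are shifted upward:

```python
import numpy as np

# Median-centering assumes most variables are unchanged; if a large fraction
# are shifted in one direction, the median absorbs part of the true effect
# and the fold-change estimates are biased toward zero.
rng = np.random.default_rng(4)
m = 10_000
logratio = rng.normal(0.0, 0.2, size=m)        # unaffected variables
altered = rng.choice(m, size=int(0.6 * m), replace=False)
logratio[altered] += 1.0                       # 60% positively affected: skewed

normalized = logratio - np.median(logratio)    # standard median normalization
bias = 1.0 - np.mean(normalized[altered])      # underestimation of the true shift
assert bias > 0.5                              # most of the effect is removed
```

With 60% of variables shifted, the global median lies inside the altered group, so more than half of the true log-ratio is subtracted away; this is the failure mode the HMM-assisted re-normalization is designed to detect and correct.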

  5. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow effective inference in problems with a high degree of complexity (e.g. several thousands of genes) and a small number of observations (e.g. 10-100), as typically occurs in high-throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows identification of the regions with a high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...

  6. Three-dimensional printing of stem cell-laden hydrogels submerged in a hydrophobic high-density fluid

    International Nuclear Information System (INIS)

    Duarte Campos, Daniela F; Blaeser, Andreas; Weber, Michael; Fischer, Horst; Jäkel, Jörg; Neuss, Sabine; Jahnen-Dechent, Wilhelm

    2013-01-01

    Over the last decade, bioprinting technologies have begun providing important tissue engineering strategies for regenerative medicine and organ transplantation. The major drawback of past approaches has been poor or inadequate material-printing device and substrate combinations, as well as the relatively small size of the printed construct. Here, we hypothesise that cell-laden hydrogels can be printed when submerged in perfluorotributylamine (C12F27N), a hydrophobic high-density fluid, and that cells placed within three-dimensional constructs remain viable, allowing for cell proliferation and production of extracellular matrix. Human mesenchymal stem cells and MG-63 cells were encapsulated into agarose hydrogels and subsequently printed as high-aspect-ratio three-dimensional structures supported in the high-density fluorocarbon. Three-dimensional structures of various shapes and sizes were manufactured and remained stable for more than six months. Live/dead and DAPI staining showed viable cells 24 h after the printing process, as well as after 21 days in culture. Histological and immunohistochemical analyses after 14 and 21 days revealed viable cells with marked matrix production and signs of proliferation. The compressive strength values of the printed gels increased accordingly during the two weeks in culture, revealing encouraging results for future applications in regenerative medicine. (paper)

  7. A double-pass interferometer for measurement of dimensional changes

    International Nuclear Information System (INIS)

    Ren, Dongmei; Lawton, K M; Miller, J A

    2008-01-01

    A double-pass interferometer was developed for measuring dimensional changes of materials in a nanoscale absolute interferometric dilatometer. This interferometer realized double-ended measurement of a sample using a single-detection double-pass interference system. The nearly balanced design, in which the measurement beam and the reference beam have equal optical path lengths except for the path difference caused by the sample itself, gives this interferometer high stability, which was verified by the measurement of a quasi-zero-length sample. Preliminary experiments and uncertainty analysis show that this interferometer should be able to measure dimensional changes with a characteristic uncertainty at the nanometer level.

  8. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model it. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily concerned with high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of Neimark-Sacker type is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increase linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high-dimensional

  9. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
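The core ParTI idea can be caricatured in a few lines. The sketch below is not the ParTI software: the fixed archetype positions, the nearest-archetype assignment, and the "mean feature value per archetype" enrichment rule are all simplifying assumptions standing in for the paper's polytope fitting and statistical enrichment tests.

```python
import math

# Toy illustration of the ParTI idea (NOT the ParTI code): treat fixed
# points as the archetypes (polytope vertices), assign each sample to
# its nearest archetype, and ask whether a feature is enriched among
# the samples closest to each vertex. Archetype coordinates and the
# enrichment rule are assumed for illustration only.

ARCHETYPES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def nearest_archetype(p):
    """Index of the archetype closest to sample p (Euclidean distance)."""
    return min(range(len(ARCHETYPES)),
               key=lambda k: math.dist(p, ARCHETYPES[k]))

def enrichment_by_archetype(points, feature):
    """Mean feature value among the samples nearest to each archetype;
    a vertex whose neighbourhood has a high mean suggests the feature
    relates to the task that vertex represents."""
    sums = [0.0] * len(ARCHETYPES)
    counts = [0] * len(ARCHETYPES)
    for p, f in zip(points, feature):
        k = nearest_archetype(p)
        sums[k] += f
        counts[k] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]
```

In the real method the polytope (e.g. a tetrahedron in gene expression space) is fitted to the data rather than fixed in advance, and enrichment is tested as a function of distance to each vertex.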

  10. Three-dimensional effects in fracture mechanics

    International Nuclear Information System (INIS)

    Benitez, F.G.

    1991-01-01

    An overall view is given of the pioneering theories and works which have highlighted the three-dimensional nature of fracture mechanics in recent years. The main aim is not an exhaustive review but a natural presentation of the latest developments in this scientific field. This work attempts to delineate the limits of disregarding three-dimensional behaviour in theories, analyses and experiments. Moreover, it tries to draw attention to the scant, although increasing, interest that the three-dimensional nature of fracture has received among the scientific community. Finally, a constructive discussion is presented on the use of two-dimensional solutions in the analysis of geometries which have a three-dimensional configuration. The static two-dimensional solutions and their fields of application are reviewed, as are the static three-dimensional solutions, for which a comparative analysis with elastoplastic and elastostatic solutions is presented. To conclude, the dynamic three-dimensional solutions are compared with the asymptotic two-dimensional ones from the point of view of practical applications. (author)

  11. High-speed photography of dynamic photoelastic experiment with a highly accurate blasting machine

    Science.gov (United States)

    Katsuyama, Kunihisa; Ogata, Yuji; Wada, Yuji; Hashizume, K.

    1995-05-01

    A highly accurate blasting machine with timing control to within 1 microsecond was developed. First, the explosion of a bridge wire in an electric detonator was observed; next, the detonations of caps were observed with a high-speed camera. It is well known that a compressive stress wave reflects at the free face and propagates backward as a tensile stress wave, and that cracks grow when the tensile stress reaches the dynamic tensile strength. The behavior of these cracks has been discussed through observation by dynamic photoelastic high-speed photography and three-dimensional dynamic stress analysis.

  12. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
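The Robbins-Monro half of the MH-RM algorithm is a generic stochastic-approximation recursion. The sketch below is not Cai's MH-RM implementation: it shows only the decaying-gain update on a toy root-finding problem, where the noisy quantity is an assumed stand-in for the complete-data log-likelihood gradient that MH-RM would evaluate at Metropolis-Hastings imputations.

```python
import random

# Generic Robbins-Monro stochastic approximation: find theta solving
# E[g(theta, noise)] = 0 with the decaying gain sequence 1/k
# (sum of gains diverges, sum of squared gains converges).
# NOT Cai's MH-RM: in MH-RM, noisy_g would be a complete-data
# log-likelihood gradient evaluated at Metropolis-Hastings draws.

def robbins_monro(noisy_g, theta0, n_iter=20000, seed=0):
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, n_iter + 1):
        gain = 1.0 / k                # Robbins-Monro gain schedule
        theta -= gain * noisy_g(theta, rng)
    return theta

# Toy target: solve E[theta - (mu + noise)] = 0, i.e. theta -> mu = 3.
mu = 3.0
estimate = robbins_monro(lambda t, rng: (t - mu) + rng.gauss(0.0, 1.0),
                         theta0=0.0)
```

With the 1/k gain and this linear g, the recursion reduces to a running average of the noisy observations, which is why the iterates converge to mu with probability one, mirroring the convergence property quoted in the abstract.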

  13. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

    The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that realises both high resolution and high throughput using a two-dimensional area sensor. For spectral resolution, we obtained the interferogram over a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to realise a higher throughput by accumulating the signal across vertical pixels. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.

  14. A journey from nuclear criticality methods to high energy density radflow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, Todd James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-30

    Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high-energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, but they sure are fun.

  15. High-speed three-dimensional plasma temperature determination of axially symmetric free-burning arcs

    International Nuclear Information System (INIS)

    Bachmann, B; Ekkert, K; Bachmann, J-P; Marques, J-L; Schein, J; Kozakov, R; Gött, G; Schöpp, H; Uhrlandt, D

    2013-01-01

    In this paper we introduce an experimental technique that allows high-speed, three-dimensional determination of electron density and temperature in axially symmetric free-burning arcs. Optical filters with narrow spectral bands of 487.5–488.5 nm and 689–699 nm are utilized to gain two-dimensional spectral information on a free-burning argon tungsten inert gas arc. A setup of mirrors allows one to image identical arc sections in the two spectral bands onto a single camera chip. Two different Abel inversion algorithms have been developed to reconstruct the original radial distribution of emission coefficients detected within each spectral window and to confirm the results. Under the assumption of local thermodynamic equilibrium we calculate emission coefficients as a function of temperature by application of the Saha equation, the ideal gas law, the quasineutral gas condition and the NIST compilation of spectral lines. Ratios of calculated emission coefficients are compared with measured ones, yielding local plasma temperatures. In the case of axial symmetry the three-dimensional plasma temperature distributions have been determined at dc currents of 100, 125, 150 and 200 A, yielding temperatures up to 20000 K in the hot cathode region. These measurements have been validated by four different techniques utilizing a high-resolution spectrometer at different positions in the plasma. Plasma temperatures show good agreement across the different methods. Additionally, spatially resolved transient plasma temperatures have been measured for a dc pulsed process employing a high frame rate of 33000 frames per second, showing the modulation of the arc isothermals with time and providing information about the sensitivity of the experimental approach. (paper)
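The paper's two Abel inversion algorithms are not reproduced in the abstract; as a generic illustration of what such an inversion does, here is a sketch of "onion peeling", one classical discretization, under the assumed model of an axially symmetric emitter divided into uniform rings of constant emission.

```python
import math

# Sketch of "onion peeling", one classical discrete Abel inversion
# (an illustrative stand-in -- the paper develops its own two
# algorithms). Assumptions: axial symmetry, n uniform rings of width
# dr, emission coefficient eps[j] constant within ring j.

def chord_matrix(n, dr):
    """L[i][j] = path length of the line of sight at lateral offset
    y_i = i*dr through ring j (radii in [j*dr, (j+1)*dr]); zero for
    rings inside the chord's closest approach (j < i)."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        y = i * dr
        for j in range(i, n):
            r_out, r_in = (j + 1) * dr, j * dr
            outer = math.sqrt(r_out ** 2 - y ** 2)
            inner = math.sqrt(max(r_in ** 2 - y ** 2, 0.0))
            L[i][j] = 2.0 * (outer - inner)
    return L

def onion_peel(projection, dr):
    """Recover per-ring emission coefficients from line-integrated
    intensities by back-substitution from the outermost ring inward
    (the chord matrix is upper triangular)."""
    n = len(projection)
    L = chord_matrix(n, dr)
    eps = [0.0] * n
    for i in range(n - 1, -1, -1):
        residual = projection[i] - sum(L[i][j] * eps[j]
                                       for j in range(i + 1, n))
        eps[i] = residual / L[i][i]
    return eps
```

A quick self-check is to forward-project a known radial profile through the chord matrix and verify that `onion_peel` recovers it; with measured (noisy) projections, regularized inversions are usually preferred because onion peeling amplifies noise toward the axis.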

  16. Usefulness and capability of three-dimensional, full high-definition movies for surgical education.

    Science.gov (United States)

    Takano, M; Kasahara, K; Sugahara, K; Watanabe, A; Yoshida, S; Shibahara, T

    2017-12-01

    Because of changing surgical procedures in the fields of oral and maxillofacial surgery, new methods for surgical education are needed and could draw on recent advances in digital technology. Many doctors have attempted to use digital technology as an educational tool for surgical training, and movies have played an important role in these attempts. We have been using a 3D full high-definition (full-HD) camcorder to record movies of intra-oral surgeries. The subjects were medical students and doctors receiving surgical training who did not have actual surgical experience (n = 67). Participants watched an 8-min, 2D movie of orthognathic surgery and subsequently watched the 3D version. After watching the 3D movie, participants were asked to complete a questionnaire. Most participants (84%) rated the 3D movie as excellent or good and answered that its advantages were its appearance of solidity and realism. Almost all participants (99%) answered that 3D movies were quite useful or useful for medical practice. Three-dimensional full-HD movies have the potential to improve the quality of medical education and clinical practice in oral and maxillofacial surgery.

  17. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.

    Science.gov (United States)

    Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros

    2018-05-01

    We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.

  18. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  19. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  20. Metallic few-layered VS2 ultrathin nanosheets: high two-dimensional conductivity for in-plane supercapacitors.

    Science.gov (United States)

    Feng, Jun; Sun, Xu; Wu, Changzheng; Peng, Lele; Lin, Chenwen; Hu, Shuanglin; Yang, Jinlong; Xie, Yi

    2011-11-09

    With the rapid development of portable electronics, such as e-paper and other flexible devices, practical power sources with ultrathin geometries have become an important prerequisite, and supercapacitors with in-plane configurations are emerging as a favorable and competitive candidate. Electrode materials with two-dimensional (2D) permeable channels, high-conductivity structural scaffolds, and high specific surface areas are the indispensable requirements for in-plane supercapacitors with superior performance, yet it is difficult for presently available inorganic materials to excel in all aspects. In this sense, vanadium disulfide (VS2) presents an ideal material platform due to its synergistic properties of metallic nature and exfoliative character brought by the conducting S-V-S layers stacked by weak van der Waals interlayer interactions, offering great potential as a high-performance in-plane supercapacitor electrode. Herein, we developed a unique ammonia-assisted strategy to exfoliate bulk VS2 flakes into ultrathin VS2 nanosheets stacked with fewer than five S-V-S single layers, representing a brand new two-dimensional material with metallic behavior aside from graphene. Moreover, highly conductive VS2 thin films were successfully assembled to construct the electrodes of in-plane supercapacitors. As expected, a specific capacitance of 4760 μF/cm² was realized in a 150 nm in-plane configuration, with no obvious degradation observed even after 1000 charge/discharge cycles, yielding a new high-performance in-plane supercapacitor based on quasi-two-dimensional materials.

  1. Three Dimensional Dirac Semimetals

    Science.gov (United States)

    Zaheer, Saad

    2014-03-01

    Dirac points on the Fermi surface of two-dimensional graphene are responsible for its unique electronic behavior. One can ask whether any three-dimensional materials support similar pseudorelativistic physics in their bulk electronic spectra. This possibility has been investigated theoretically and is now supported by two successful experimental demonstrations reported during the last year. In this talk, I will summarize the various ways in which Dirac semimetals can be realized in three dimensions, with primary focus on a specific theory developed on the basis of representations of crystal space groups. A three-dimensional Dirac (Weyl) semimetal can appear in the presence (absence) of inversion symmetry by tuning parameters to the phase boundary separating a bulk insulating and a topological insulating phase. More generally, we find that specific rules governing crystal symmetry representations of electrons with spin lead to robust Dirac points at high-symmetry points in the Brillouin zone. Combining these rules with microscopic considerations identifies six candidate Dirac semimetals. Another route to engineering Dirac semimetals combines crystal symmetry and band inversion. Several candidate materials have been proposed utilizing this mechanism, and one of the candidates has been successfully demonstrated to be a Dirac semimetal in two independent experiments. Work carried out in collaboration with: Julia A. Steinberg, Steve M. Young, J.C.Y. Teo, C.L. Kane, E.J. Mele and Andrew M. Rappe.

  2. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
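The variogram idea behind VARS can be sketched in miniature. The code below is not the VARS algorithm (which builds full multi-scale variograms and integrated sensitivity metrics over star-shaped samples); it only estimates a one-factor directional variogram, gamma_i(h) = 0.5 E[(f(x + h e_i) - f(x))^2], at a single lag h — the sampling scheme and the toy model are assumptions.

```python
import random

# Toy variogram-based sensitivity metric in the spirit of VARS (NOT
# the actual VARS algorithm): for input factor i, estimate the
# directional variogram gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]
# over random base points x; a larger gamma at lag h means the model
# response varies more along that factor, i.e. higher sensitivity.

def directional_variogram(f, dim, factor, h, n_base=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_base):
        x = [rng.random() for _ in range(dim)]
        x2 = list(x)
        x2[factor] = x[factor] + h      # perturb one factor by lag h
        total += (f(x2) - f(x)) ** 2
    return 0.5 * total / n_base

# Assumed toy model: y = 10*x0 + x1, so factor 0 should dominate.
model = lambda x: 10.0 * x[0] + x[1]
g0 = directional_variogram(model, dim=2, factor=0, h=0.1)
g1 = directional_variogram(model, dim=2, factor=1, h=0.1)
```

For this linear model the estimates are essentially exact (0.5*(10h)^2 and 0.5*h^2), and the ranking g0 > g1 identifies factor 0 as the important one; the appeal of the variogram approach is that such rankings stabilize with far fewer model runs than variance-based indices typically require.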

  3. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: discarding them via feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
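The select-then-quantize pipeline can be sketched as follows. This is not the paper's importance-sorting algorithm: variance across the database is used here as an assumed, unsupervised stand-in for the importance score, and the sign-of-centred-component rule is one simple form of 1-bit quantization.

```python
# Minimal stand-in for a select-then-quantize pipeline (illustrative
# only -- not the paper's importance-sorting algorithm): rank
# dimensions by an unsupervised importance score (variance across the
# database), keep the top-k, then 1-bit quantize each kept dimension
# by the sign of its mean-centred value.

def select_and_quantize(vectors, k):
    n, d = len(vectors), len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(d)]
    var = [sum((v[j] - means[j]) ** 2 for v in vectors) / n
           for j in range(d)]
    # indices of the k highest-variance dimensions (assumed importance)
    keep = sorted(range(d), key=lambda j: var[j], reverse=True)[:k]
    # 1-bit code: 1 if the centred component is positive, else 0
    codes = [[1 if v[j] - means[j] > 0 else 0 for j in keep]
             for v in vectors]
    return keep, codes
```

Each stored vector shrinks from d floats to k bits, which is the storage argument the abstract makes; in the supervised case of the paper, the importance score would come from label information rather than raw variance.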

  4. Hierarchical one-dimensional ammonium nickel phosphate microrods for high-performance pseudocapacitors

    CSIR Research Space (South Africa)

    Raju, K

    2015-12-01

    Full Text Available: DOI: 10.1038/srep17629 (Scientific Reports 5:17629). High-performance electrochemical capacitors... Hierarchical 1-D and 2-D materials maximize the supercapacitive properties due to their unique ability to permit ion...

  5. Gyrokinetic Vlasov code including full three-dimensional geometry of experiments

    International Nuclear Information System (INIS)

    Nunami, Masanori; Watanabe, Tomohiko; Sugama, Hideo

    2010-03-01

    A new gyrokinetic Vlasov simulation code, GKV-X, is developed for investigating the turbulent transport in magnetic confinement devices with non-axisymmetric configurations. Effects of the magnetic surface shapes in a three-dimensional equilibrium obtained from the VMEC code are accurately incorporated. Linear simulations of the ion temperature gradient instabilities and the zonal flows in the Large Helical Device (LHD) configuration are carried out by the GKV-X code for a benchmark test against the GKV code. The frequency, the growth rate, and the mode structure of the ion temperature gradient instability are influenced by the VMEC geometrical data such as the metric tensor components of the Boozer coordinates for high poloidal wave numbers, while the difference between the zonal flow responses obtained by the GKV and GKV-X codes is found to be small in the core LHD region. (author)

  6. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation

    Directory of Open Access Journals (Sweden)

    Zhou Yang

    2017-01-01

    Full Text Available: High pressures, high speeds, low noise and miniaturization are the directions of development in hydraulic pumps. In line with this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulation of the flow inside the HRP, with and without cavitation, is completed by means of computational fluid dynamics (CFD) in this paper, which contributes to understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.

  7. High-precision two-dimensional atom localization via quantum interference in a tripod-type system

    International Nuclear Information System (INIS)

    Wang, Zhiping; Yu, Benli

    2014-01-01

    A scheme is proposed for high-precision two-dimensional atom localization in a four-level tripod-type atomic system via measurement of the excited state population. It is found that because of the position-dependent atom–field interaction, the precision of 2D atom localization can be significantly improved by appropriately adjusting the system parameters. Our scheme may be helpful in laser cooling or atom nanolithography via high-precision and high-resolution atom localization. (letter)

  8. Low-resistance gateless high electron mobility transistors using three-dimensional inverted pyramidal AlGaN/GaN surfaces

    International Nuclear Information System (INIS)

    So, Hongyun; Senesky, Debbie G.

    2016-01-01

    In this letter, three-dimensional gateless AlGaN/GaN high electron mobility transistors (HEMTs) were demonstrated with 54% reduction in electrical resistance and 73% increase in surface area compared with conventional gateless HEMTs on planar substrates. Inverted pyramidal AlGaN/GaN surfaces were microfabricated using potassium hydroxide etched silicon with exposed (111) surfaces and metal-organic chemical vapor deposition of coherent AlGaN/GaN thin films. In addition, electrical characterization of the devices showed that a combination of series and parallel connections of the highly conductive two-dimensional electron gas along the pyramidal geometry resulted in a significant reduction in electrical resistance at both room and high temperatures (up to 300 °C). This three-dimensional HEMT architecture can be leveraged to realize low-power and reliable power electronics, as well as harsh environment sensors with increased surface area.

  9. Low-resistance gateless high electron mobility transistors using three-dimensional inverted pyramidal AlGaN/GaN surfaces

    Energy Technology Data Exchange (ETDEWEB)

    So, Hongyun, E-mail: hyso@stanford.edu [Department of Aeronautics and Astronautics, Stanford University, Stanford, California 94305 (United States); Senesky, Debbie G. [Department of Aeronautics and Astronautics, Stanford University, Stanford, California 94305 (United States); Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)

    2016-01-04

    In this letter, three-dimensional gateless AlGaN/GaN high electron mobility transistors (HEMTs) were demonstrated with 54% reduction in electrical resistance and 73% increase in surface area compared with conventional gateless HEMTs on planar substrates. Inverted pyramidal AlGaN/GaN surfaces were microfabricated using potassium hydroxide etched silicon with exposed (111) surfaces and metal-organic chemical vapor deposition of coherent AlGaN/GaN thin films. In addition, electrical characterization of the devices showed that a combination of series and parallel connections of the highly conductive two-dimensional electron gas along the pyramidal geometry resulted in a significant reduction in electrical resistance at both room and high temperatures (up to 300 °C). This three-dimensional HEMT architecture can be leveraged to realize low-power and reliable power electronics, as well as harsh environment sensors with increased surface area.

  10. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...

  11. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme of high-efficiency one-dimensional (1D) atom localization via measurement of the upper-state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the localization peak position and number, as well as the conditional position probability, can be easily controlled by the system parameters, and sub-half-wavelength atom localization is also observed. More importantly, there is a 100% probability of detecting the atom in the subwavelength domain when the corresponding conditions are satisfied. The proposed scheme may open up a promising way to achieve high-precision and high-efficiency 1D atom localization. (paper)

  12. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies the effective dimensionality of the data from its structure, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
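
    At the core of GRRLF is classical reduced-rank regression. A minimal numpy sketch of that building block (toy data and dimensions are hypothetical; the paper's latent factors and nonparametric fields are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 8, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))  # rank-2 ground truth
X = rng.normal(size=(n, p))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

# ordinary least-squares fit, then project fitted values onto the
# top-r right singular directions (classical reduced-rank solution)
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]              # rank-r projector in response space
B_rr = B_ols @ P                   # reduced-rank coefficient estimate

print(np.linalg.matrix_rank(B_rr))  # -> 2
```

    The projection enforces the low effective dimensionality that GRRLF exploits to avoid overfitting.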

  13. High-Level Heteroatom Doped Two-Dimensional Carbon Architectures for Highly Efficient Lithium-Ion Storage

    Directory of Open Access Journals (Sweden)

    Zhijie Wang

    2018-04-01

    Full Text Available In this work, high-level heteroatom doped two-dimensional hierarchical carbon architectures (H-2D-HCA) are developed for highly efficient Li-ion storage applications. The achieved H-2D-HCA possesses a hierarchical 2D morphology consisting of tiny carbon nanosheets vertically grown on carbon nanoplates and containing a hierarchical porosity with multiscale pore size. More importantly, the H-2D-HCA shows abundant heteroatom functionality, with sulfur (S) doping of 0.9% and nitrogen (N) doping as high as 15.5%, in which the electrochemically active N accounts for 84% of total N heteroatoms. In addition, the H-2D-HCA also has an expanded interlayer distance of 0.368 nm. When used as lithium-ion battery anodes, it shows excellent Li-ion storage performance. Even at a high current density of 5 A g−1, it still delivers a high discharge capacity of 329 mA h g−1 after 1,000 cycles. First-principles calculations verify that such unique microstructure characteristics and the high-level heteroatom doping enhance the Li adsorption stability, electronic conductivity and Li diffusion mobility of carbon nanomaterials. Therefore, the H-2D-HCA could be a promising candidate for next-generation LIB anodes.

  14. Reduced, three-dimensional, nonlinear equations for high-β plasmas including toroidal effects

    International Nuclear Information System (INIS)

    Schmalz, R.

    1980-11-01

    The resistive MHD equations for toroidal plasma configurations are reduced by expanding to the second order in epsilon, the inverse aspect ratio, allowing for high β = μsub(o)p/B 2 of order epsilon. The result is a closed system of nonlinear, three-dimensional equations where the fast magnetohydrodynamic time scale is eliminated. In particular, the equation for the toroidal velocity remains decoupled. (orig.)

  15. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

    An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities calculated from experimental data, whether hadron distributions over rapidity intervals or particle distributions in an N-dimensional momentum space, one can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
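
    The Renyi (generalized) dimensions D_q referred to above can be estimated by box counting over the sample; a toy numpy sketch, with uniform point sets standing in for multiparticle momentum samples:

```python
import numpy as np

def renyi_dimension(points, q, eps):
    """Estimate the Renyi dimension D_q from box-occupation
    probabilities p_i at box size eps:
    D_q = log(sum_i p_i^q) / ((q - 1) * log eps)."""
    _, counts = np.unique(np.floor(points / eps).astype(int),
                          axis=0, return_counts=True)
    p = counts / counts.sum()
    if q == 1:                      # information dimension, limit q -> 1
        return -(p * np.log(p)).sum() / np.log(1.0 / eps)
    return np.log((p ** q).sum()) / ((q - 1) * np.log(eps))

rng = np.random.default_rng(1)
plane = rng.random((100000, 2))     # fills the unit square: D_2 near 2
t = rng.random(100000)
line = np.column_stack([t, t])      # a 1D subset of the plane: D_2 near 1

print(renyi_dimension(plane, 2, 0.01), renyi_dimension(line, 2, 0.01))
```

    A correlated (clustered) sample yields D_q below the embedding dimension, which is how the dimensionalities signal particle correlations.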

  16. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and “hidden” dimensions

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747

  17. Self-organized defect strings in two-dimensional crystals.

    Science.gov (United States)

    Lechner, Wolfgang; Polster, David; Maret, Georg; Keim, Peter; Dellago, Christoph

    2013-12-01

    Using experiments with single-particle resolution and computer simulations we study the collective behavior of multiple vacancies injected into two-dimensional crystals. We find that the defects assemble into linear strings, terminated by dislocations with antiparallel Burgers vectors. We show that these defect strings propagate through the crystal in a succession of rapid one-dimensional gliding and rare rotations. While the rotation rate decreases exponentially with the number of defects in the string, the diffusion constant is constant for large strings. By monitoring the separation of the dislocations at the end points, we measure their effective interactions with high precision beyond their spontaneous formation and annihilation, and we explain the double-well form of the dislocation interaction in terms of continuum elasticity theory.

  18. Joint High-Dimensional Bayesian Variable and Covariance Selection with an Application to eQTL Analysis

    KAUST Repository

    Bhadra, Anindya

    2013-04-22

    We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
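
    A minimal, single-response illustration of the spike-and-slab idea (not the authors' collapsed SSUR sampler; the noise variance is assumed known and all data are synthetic):

```python
import numpy as np

def spike_slab_gibbs(X, y, n_iter=1000, burn=200, v=4.0, pi=0.2, sigma2=1.0):
    """Gibbs sampler for spike-and-slab linear regression with a
    point-mass spike; returns posterior inclusion frequencies."""
    n, p = X.shape
    beta = np.zeros(p)
    gamma = np.zeros(p, dtype=bool)
    incl = np.zeros(p)
    s = (X ** 2).sum(axis=0)                      # x_j' x_j
    for it in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # residual excluding j
            b = X[:, j] @ r
            V = 1.0 / (s[j] / sigma2 + 1.0 / v)   # conditional posterior variance
            mu = V * b / sigma2                   # conditional posterior mean
            log_odds = (np.log(pi / (1 - pi))     # prior odds times Bayes factor
                        + 0.5 * np.log(V / v) + 0.5 * mu ** 2 / V)
            p_incl = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
            gamma[j] = np.random.random() < p_incl
            beta[j] = np.random.normal(mu, np.sqrt(V)) if gamma[j] else 0.0
        if it >= burn:
            incl += gamma
    return incl / (n_iter - burn)

np.random.seed(0)
rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[0, 3]] = [3.0, -2.0]
y = X @ beta_true + rng.normal(size=n)
freq = spike_slab_gibbs(X, y)
print(freq.round(2))   # near 1 for features 0 and 3, small elsewhere
```

    The paper's method additionally marginalizes out the coefficients (a collapsed sampler) and couples this selection with sparse inverse-covariance selection across correlated responses.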

  19. Three-dimensional interconnected porous graphitic carbon derived from rice straw for high performance supercapacitors

    Science.gov (United States)

    Jin, Hong; Hu, Jingpeng; Wu, Shichao; Wang, Xiaolan; Zhang, Hui; Xu, Hui; Lian, Kun

    2018-04-01

    Three-dimensional interconnected porous graphitic carbon materials are synthesized via a combination of graphitization and activation with rice straw as the carbon source. The physicochemical properties of the three-dimensional interconnected porous graphitic carbon materials are characterized by nitrogen adsorption/desorption, Fourier-transform infrared spectroscopy, X-ray diffraction, Raman spectroscopy, scanning electron microscopy and transmission electron microscopy. The results demonstrate that the as-prepared carbon is a high surface area material (a specific surface area of 3333 m2 g-1 with abundant mesoporous and microporous structures). It exhibits superb performance in symmetric double layer capacitors with a high specific capacitance of 400 F g-1 at a current density of 0.1 A g-1, good rate performance with 312 F g-1 at a current density of 5 A g-1 and favorable cycle stability with 6.4% loss after 10000 cycles at a current density of 5 A g-1 in the aqueous electrolyte of 6 M KOH. Thus, rice straw is a promising carbon source for fabricating inexpensive, sustainable and high performance supercapacitor electrode materials.

  20. Offline coupling of high-speed counter-current chromatography and gas chromatography/mass spectrometry generates a two-dimensional plot of toxaphene components.

    Science.gov (United States)

    Kapp, Thomas; Vetter, Walter

    2009-11-20

    High-speed counter-current chromatography (HSCCC), a separation technique based solely on the partitioning of solutes between two immiscible liquid phases, was applied to the fractionation of technical toxaphene, an organochlorine pesticide which consists of a complex mixture of structurally closely related compounds. A solvent system (n-hexane/methanol/water 34:24:1, v/v/v) was developed which allowed compounds of technical toxaphene (CTTs) to be separated with excellent retention of the stationary phase (S(f) = 88%). Subsequent analysis of all HSCCC fractions by gas chromatography coupled to electron-capture negative ion mass spectrometry (GC/ECNI-MS) provided a wealth of information regarding the separation characteristics of HSCCC and the composition of technical toxaphene. The visualization of the large amount of data obtained from the offline two-dimensional HSCCC-GC/ECNI-MS experiment was facilitated by the creation of a two-dimensional (2D) contour plot. The contour plot not only provided an excellent overview of the HSCCC separation progress, it also illustrated the differences in selectivity between HSCCC and GC. The results of this proof-of-concept study showed that the 2D chromatographic approach involving HSCCC facilitated the separation of CTTs that coelute in unidimensional GC. Furthermore, the creation of 2D contour plots may provide a useful means of enhancing data visualization for other offline two-dimensional separations.

  1. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by perf...

  2. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
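
    The simplest version of the idea, estimating D = R1 − R2 and hard-thresholding its entries, can be sketched as follows (a crude universal threshold stands in for the paper's entry-adaptive one; data are synthetic):

```python
import numpy as np

def differential_correlation(X1, X2, alpha=2.0):
    """Sparse estimate of R1 - R2 by hard-thresholding entries at
    alpha * sqrt(log p / n) (simplified universal threshold)."""
    n = min(X1.shape[0], X2.shape[0])
    p = X1.shape[1]
    D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    D[np.abs(D) < alpha * np.sqrt(np.log(p) / n)] = 0.0
    return D

rng = np.random.default_rng(3)
n, p = 500, 20
X1 = rng.normal(size=(n, p))
X2 = rng.normal(size=(n, p))
# correlate columns 0 and 1 only in the second group
X2[:, 1] = 0.9 * X2[:, 0] + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)
D = differential_correlation(X1, X2)
print(abs(D[0, 1]) > 0.5, np.count_nonzero(D) < 30)
```

    Thresholding the difference directly, rather than each matrix separately, is what lets the procedure adapt to sparsity of the change itself.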

  3. Highly Efficient Broadband Yellow Phosphor Based on Zero-Dimensional Tin Mixed-Halide Perovskite.

    Science.gov (United States)

    Zhou, Chenkun; Tian, Yu; Yuan, Zhao; Lin, Haoran; Chen, Banghao; Clark, Ronald; Dilbeck, Tristan; Zhou, Yan; Hurley, Joseph; Neu, Jennifer; Besara, Tiglet; Siegrist, Theo; Djurovich, Peter; Ma, Biwu

    2017-12-27

    Organic-inorganic hybrid metal halide perovskites have emerged as a highly promising class of light emitters, which can be used as phosphors for optically pumped white light-emitting diodes (WLEDs). By controlling the structural dimensionality, metal halide perovskites can exhibit tunable narrow and broadband emissions from the free-exciton and self-trapped excited states, respectively. Here, we report a highly efficient broadband yellow light emitter based on the zero-dimensional tin mixed-halide perovskite (C4N2H14Br)4SnBrxI6-x (x = 3). This rare-earth-free ionically bonded crystalline material possesses a perfect host-dopant structure, in which the light-emitting metal halide species ((SnBrxI6-x)4-, x = 3) are completely isolated from each other and embedded in the wide band gap organic matrix composed of C4N2H14Br-. The strongly Stokes-shifted broadband yellow emission that peaked at 582 nm from this phosphor, which is a result of excited state structural reorganization, has an extremely large full width at half-maximum of 126 nm and a high photoluminescence quantum efficiency of ∼85% at room temperature. UV-pumped WLEDs fabricated using this yellow emitter together with a commercial europium-doped barium magnesium aluminate blue phosphor (BaMgAl10O17:Eu2+) can exhibit high color rendering indexes of up to 85.

  4. Water experiment of high-speed, free-surface, plane jet along concave wall

    International Nuclear Information System (INIS)

    Nakamura, Hideo; Ida, Mizuho; Kato, Yoshio; Maekawa, Hiroshi; Itoh, Kazuhiro; Kukita, Yutaka

    1997-01-01

    In the International Fusion Materials Irradiation Facility (IFMIF), an intense 14 MeV neutron beam will be generated in the high-speed liquid lithium (Li) plane jet target flowing along a concave wall in vacuum. As part of the conceptual design activity (CDA) of the IFMIF, the stability of the plane liquid jet flow was studied experimentally with water in a well-defined channel geometry under non-heated conditions. A two-dimensional double-reducer nozzle newly proposed for the IFMIF target successfully provided a high-speed (≤ 17 m/s) stable water jet with a uniform velocity distribution at the nozzle exit and without flow separation in the nozzle. The free surface of the jet was covered by two-dimensional and/or three-dimensional waves, the size of which did not change much over the tested jet length of ∼130 mm. The jet velocity profile changed around the nozzle exit from uniform to that of free-vortex flow, where the product of the streamline radius and the local velocity is constant across the jet thickness. The jet thickness increased immediately after exiting the nozzle because of this velocity profile change. The jet thickness predicted by a modified one-dimensional momentum model agreed well with the data. (author)

  5. Three-dimensional carbon nanotube networks with a supported nickel oxide nanonet for high-performance supercapacitors.

    Science.gov (United States)

    Wu, Mao-Sung; Zheng, Yo-Ru; Lin, Guan-Wei

    2014-08-04

    A three-dimensional porous carbon nanotube film with a supported NiO nanonet was prepared by simple electrophoretic deposition and hydrothermal synthesis, which could deliver a high specific capacitance of 1511 F g(-1) at a high discharge current of 50 A g(-1) due to the significantly improved transport of the electrolyte and electrons.

  6. Unsteady three-dimensional behavior of natural convection in horizontal annulus

    International Nuclear Information System (INIS)

    Ohya, Toshizo; Miki, Yasutomi; Morita, Kouji; Fukuda, Kenji; Hasegawa, Shu

    1988-01-01

    A numerical analysis has been performed on unsteady three-dimensional natural convection in a concentric horizontal annulus filled with air. The explicit leap-frog scheme is used for integrating the three-dimensional time-dependent equations and the fast Fourier transform (FFT) for solving the Poisson equation for pressure. An oscillatory flow is found to occur at high Rayleigh numbers, which agrees qualitatively with the experimental observation made by Bishop et al. An experiment is also conducted to measure temperature fluctuations; a comparison between the periods of fluctuations obtained numerically and experimentally shows good agreement. Numerical calculations yield various statistical parameters of turbulence at higher Rayleigh numbers, which, however, await experimental verification. (author)
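
    On a periodic grid, the FFT-based pressure Poisson solve mentioned above reduces to division by −k² in Fourier space; a small numpy sketch, verified against a known analytic solution:

```python
import numpy as np

def poisson_fft_2d(f, L=2 * np.pi):
    """Solve d2u/dx2 + d2u/dy2 = f on a periodic square of side L
    with a spectral (FFT) method; the zero mode (mean of u) is set to 0."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers for L = 2*pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                               # avoid division by zero
    u_hat = -np.fft.fft2(f) / k2
    u_hat[0, 0] = 0.0                            # fix the arbitrary mean
    return np.real(np.fft.ifft2(u_hat))

# verify against u = sin(x) cos(y), for which laplacian(u) = -2 u
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(Y)
u = poisson_fft_2d(-2 * u_exact)
print(np.abs(u - u_exact).max() < 1e-10)  # True
```

    For smooth periodic fields the spectral solve is exact to machine precision, which is why FFT Poisson solvers are a standard companion to explicit time-stepping schemes like leap-frog.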

  7. Three-dimensional iron sulfide-carbon interlocked graphene composites for high-performance sodium-ion storage

    DEFF Research Database (Denmark)

    Huang, Wei; Sun, Hongyu; Shangguan, Huihui

    2018-01-01

    Three-dimensional (3D) carbon-wrapped iron sulfide interlocked graphene (Fe7S8@C-G) composites for high-performance sodium-ion storage are designed and produced through electrostatic interactions and subsequent sulfurization. The iron-based metal–organic frameworks (MOFs, MIL-88-Fe) interact with...

  8. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
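
    The FANS transform itself, replacing each feature by an estimated log marginal density ratio before a logistic fit, can be sketched as follows (Gaussian KDEs and a plain, unpenalized logistic fit stand in for the paper's estimators; data are synthetic):

```python
import numpy as np
from scipy.stats import gaussian_kde

def fans_transform(X_train, y_train, X):
    """Replace each feature by its estimated log marginal density ratio
    log f1_j(x) / f0_j(x), using class-conditional KDEs per coordinate."""
    Z = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        f1 = gaussian_kde(X_train[y_train == 1, j])
        f0 = gaussian_kde(X_train[y_train == 0, j])
        Z[:, j] = np.log(f1(X[:, j]) + 1e-12) - np.log(f0(X[:, j]) + 1e-12)
    return Z

def fit_logistic(Z, y, lr=0.1, n_iter=500):
    """Plain logistic regression by gradient ascent (the paper uses a
    penalized fit; the penalty is omitted here for brevity)."""
    A = np.column_stack([np.ones(len(Z)), Z])
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w += lr * A.T @ (y - p) / len(y)
    return w

rng = np.random.default_rng(4)
n = 400
y = rng.integers(0, 2, n)
# nonlinear signal: feature 0 is informative through its scale, not its sign,
# so a linear classifier on the raw features is no better than chance
X = rng.normal(size=(n, 3))
X[:, 0] = np.where(y == 1, 2.5, 1.0) * X[:, 0]
Z = fans_transform(X, y, X)
w = fit_logistic(Z, y)
p = 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(n), Z]) @ w))
acc = ((p > 0.5) == y).mean()
print(acc)
```

    Because the density ratio is the optimal univariate classifier, the transformed feature carries the scale difference that the raw linear model cannot see.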

  9. Dimensional consistency achieved in high-performance synchronizing hubs

    International Nuclear Information System (INIS)

    Garcia, P.; Campos, M.; Torralba, M.

    2013-01-01

    The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to improve. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering (2P2S) production route has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have a great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of the parameters that are relevant to the product and process. (Author) 21 refs.

  10. Clarifying the Conceptualization, Dimensionality, and Structure of Emotion: Response to Barrett and Colleagues.

    Science.gov (United States)

    Cowen, Alan S; Keltner, Dacher

    2018-04-01

    We present a mathematically based framework distinguishing the dimensionality, structure, and conceptualization of emotion-related responses. Our recent findings indicate that reported emotional experience is high-dimensional, involves gradients between categories traditionally thought of as discrete (e.g., 'fear', 'disgust'), and cannot be reduced to widely used domain-general scales (valence, arousal, etc.). In light of our conceptual framework and findings, we address potential methodological and conceptual confusions in Barrett and colleagues' commentary on our work. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. SAMNet: a network-based approach to integrate multi-dimensional high throughput datasets.

    Science.gov (United States)

    Gosline, Sara J C; Spencer, Sarah J; Ursu, Oana; Fraenkel, Ernest

    2012-11-01

    The rapid development of high throughput biotechnologies has led to an onslaught of data describing genetic perturbations and changes in mRNA and protein levels in the cell. Because each assay provides a one-dimensional snapshot of active signaling pathways, it has become desirable to perform multiple assays (e.g. mRNA expression and phospho-proteomics) to measure a single condition. However, as experiments expand to accommodate various cellular conditions, proper analysis and interpretation of these data have become more challenging. Here we introduce a novel approach called SAMNet, for Simultaneous Analysis of Multiple Networks, that is able to interpret diverse assays over multiple perturbations. The algorithm uses a constrained optimization approach to integrate mRNA expression data with upstream genes, selecting edges in the protein-protein interaction network that best explain the changes across all perturbations. The result is a putative set of protein interactions that succinctly summarizes the results from all experiments, highlighting the network elements unique to each perturbation. We evaluated SAMNet in both yeast and human datasets. The yeast dataset measured the cellular response to seven different transition metals, and the human dataset measured cellular changes in four different lung cancer models of Epithelial-Mesenchymal Transition (EMT), a crucial process in tumor metastasis. SAMNet was able to identify canonical yeast metal-processing genes unique to each commodity in the yeast dataset, as well as human genes such as β-catenin and TCF7L2/TCF4 that are required for EMT signaling but escaped detection in the mRNA and phospho-proteomic data. Moreover, SAMNet also highlighted drugs likely to modulate EMT, identifying a series of less canonical genes known to be affected by the BCR-ABL inhibitor imatinib (Gleevec), suggesting a possible influence of this drug on EMT.

  12. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  13. ONE-DIMENSIONAL AND TWO-DIMENSIONAL LEADERSHIP STYLES

    OpenAIRE

    Nikola Stefanović

    2007-01-01

    In order to motivate their group members to perform certain tasks, leaders use different leadership styles. These styles are based on leaders' backgrounds, knowledge, values, experiences, and expectations. The one-dimensional styles, used by many world leaders, are autocratic and democratic styles. These styles lie on the two opposite sides of the leadership spectrum. In order to precisely define the leadership styles on the spectrum between the autocratic leadership style and the democratic ...

  14. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important aerospace applications (e.g. space docking tests). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)
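
    Under a simplified linear model with equal tension and compression stiffness (precisely the assumption the paper relaxes), the calibration step reduces to a least-squares fit of the matrix mapping the redundant branch signals to the six force/torque components; a toy numpy sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 8                                   # branches; m > 6 makes the sensor over-constrained
A = rng.normal(size=(m, 6))             # hypothetical branch response matrix
F_cal = rng.normal(size=(6, 60))        # 60 known calibration wrenches (Fx..Mz)
S = A @ F_cal + 0.01 * rng.normal(size=(m, 60))  # measured branch signals + noise

# least-squares calibration matrix C mapping signals back to wrenches
C, *_ = np.linalg.lstsq(S.T, F_cal.T, rcond=None)
C = C.T                                 # shape (6, m)

f_test = np.array([10.0, 0, 0, 0, 0, 1.0])       # apply a known wrench
f_hat = C @ (A @ f_test)                # reconstruction from noiseless signals
print(np.round(f_hat, 2))
```

    The redundancy (m > 6) averages out branch noise; the paper's contribution is to replace the single matrix A with branch-state-dependent stiffness terms.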

  15. One-dimensional numerical simulations of the low-frequency electric fields in the CRIT 1 and CRIT 2 rocket experiments

    International Nuclear Information System (INIS)

    Bolin, O.; Brenning, N.

    1992-04-01

    One-dimensional numerical particle simulations of the ionospheric barium injection experiments CRIT 1 and CRIT 2 have been performed, using a realistic model for the shape and the time development of the injected neutral cloud. The electrodynamic response of the ionosphere to these injections is modelled by magnetic-field-aligned currents, using the concept of Alfven conductivity. The results show very good agreement with the CRIT data, especially concerning the low-frequency oscillations that were seen after the initial phase of the injections. The shapes, amplitudes, phases, and decay times of the electric fields are all very close to the values measured in the CRIT experiments. (au)

  16. A journey from nuclear criticality methods to high energy density radflow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, Todd James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-08

    Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy and they are as saturated with politics as a presidential election, but they sure are fun.

  17. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Background: Large-dimensional data modelling often relies on variable-reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields lower model accuracy. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset in order to compare their classification accuracy: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, and the sensitivity and specificity of each model were computed. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
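    The 10-fold cross-validation protocol described above can be sketched in a few lines. The synthetic two-class data, the fold split, and the two toy classifiers below (1-nearest-neighbour and nearest-centroid, standing in for the paper's four methods) are illustrative assumptions, not the authors' code.

```python
import math
import random

random.seed(0)

# Hypothetical two-class data: class 0 near (0, 0), class 1 near (2, 2).
data = [([random.gauss(c * 2.0, 1.0), random.gauss(c * 2.0, 1.0)], c)
        for c in (0, 1) for _ in range(50)]
random.shuffle(data)

def knn1(train, x):
    # 1-nearest-neighbour: label of the closest training point.
    return min(train, key=lambda p: math.dist(p[0], x))[1]

def centroid(train, x):
    # Nearest-centroid: label of the closest class mean.
    means = {}
    for lab in (0, 1):
        pts = [p for p, l in train if l == lab]
        means[lab] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(means, key=lambda lab: math.dist(means[lab], x))

def cv10(classify):
    # 10-fold cross-validation: train on 9 folds, test on the held-out one.
    folds = [data[i::10] for i in range(10)]
    correct = total = 0
    for i, test_fold in enumerate(folds):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        for x, lab in test_fold:
            correct += classify(train, x) == lab
            total += 1
    return correct / total

acc_knn = cv10(knn1)
acc_cent = cv10(centroid)
```

    Because every observation is held out exactly once, the two accuracies are directly comparable, which is what licenses the paper's pairwise t-tests across methods.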

  18. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning-parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning-parameter selection for penalized methods in high-dimensional logistic regression models.
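    The CV-AUC criterion rests on the empirical AUC itself, which equals the Mann-Whitney probability that a randomly chosen case scores above a randomly chosen control. A minimal stand-alone implementation of that quantity (not the authors' coordinate-descent code) might look like:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (case, control) pairs ranked correctly,
    counting ties as one half."""
    concordant = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos_scores) * len(neg_scores))
```

    Tuning-parameter selection then amounts to computing this quantity on each held-out fold for every candidate penalty level and keeping the maximizer.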

  19. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

    Scientists routinely compare gene expression levels in cases versus controls, in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to sparse principal component analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from schizophrenia patients and controls, we provide a novel list of genes implicated in schizophrenia and reveal intriguing patterns of change in gene co-expression for schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices of practical interest, such as weighted adjacency matrices.
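    The core of a test of this kind, a statistic on the spectrum of the differential covariance matrix calibrated by permuting group labels, can be sketched as follows. The statistic here is simply the largest-magnitude eigenvalue of S1 - S2, without the sparsity-seeking refinement of the actual sLED method, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_stat(x, y):
    # Largest-magnitude eigenvalue of the differential covariance matrix.
    d = np.cov(x, rowvar=False) - np.cov(y, rowvar=False)
    return np.max(np.abs(np.linalg.eigvalsh(d)))

def permutation_pvalue(x, y, n_perm=200):
    # Calibrate the statistic by randomly reassigning group labels.
    observed = spectral_stat(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if spectral_stat(pooled[idx[:n]], pooled[idx[n:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Two groups drawn from the same covariance: no real signal present.
x = rng.normal(size=(40, 5))
y = rng.normal(size=(40, 5))
p = permutation_pvalue(x, y)
```

    The permutation calibration sidesteps the difficult null distribution of spectral statistics in high dimensions, which is one reason eigenvalue-based two-sample covariance tests are practical at all.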

  20. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment between the camera's optical axis and the object surface) and (2) out-of-plane motions (both translations and rotations) of the specimen. These introduce measurement errors into 2D DIC results, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track its motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) the three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method gives good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The method has also been applied in tensile experiments, where it yields high-accuracy results.

  1. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  2. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    Science.gov (United States)

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p-small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
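    A minimal numerical illustration of why a penalty is needed, assuming a generic ridge-type (linear shrinkage) penalty rather than the specific penalizations of the paper: with more traits than species the sample covariance matrix is singular, while a shrunken estimate is positive definite and therefore usable in a likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 20, 50                        # fewer "species" than traits: p > n
X = rng.normal(size=(n, p))          # synthetic trait matrix

S = np.cov(X, rowvar=False)          # rank-deficient sample covariance
lam = 0.5
target = np.diag(np.diag(S))         # shrink towards the diagonal
S_pen = (1 - lam) * S + lam * target # linear (ridge-type) shrinkage

min_eig_raw = np.linalg.eigvalsh(S)[0]     # essentially zero: singular
min_eig_pen = np.linalg.eigvalsh(S_pen)[0] # bounded away from zero
```

    Shrinkage trades a little bias for an invertible, well-conditioned estimate; the paper's contribution is to embed this idea in the phylogenetically structured likelihood and to choose the penalty intensity in a principled way.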

  3. Study on two-dimensional induced signal readout of MRPC

    International Nuclear Information System (INIS)

    Wu Yucheng; Yue Qian; Li Yuanjing; Ye Jin; Cheng Jianping; Wang Yi; Li Jin

    2012-01-01

    A two-dimensional readout-electrode structure for the induced-signal readout of MRPC has been studied in both simulation and experiment. Several MRPC prototypes were produced, and a series of test experiments was carried out and compared with the simulation in order to verify the simulation model. The experimental results are in good agreement with those of the simulation. This method will be used to design the two-dimensional signal readout mode of MRPC in future work.

  4. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator.Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  5. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high-dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure, forming a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. The approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality, and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high-dimensional distributed data.
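    For unit vectors the construction is concrete: the similarity is the cosine of the enclosed angle (the projection), and the dissimilarity is the length of the orthogonal component, i.e. the sine. A sketch of the pair, with a numerical spot-check of the triangle inequality on random nonnegative vectors (the vector dimension and sample size are arbitrary choices for illustration):

```python
import math
import random

random.seed(0)

def unit(v):
    nrm = math.sqrt(sum(x * x for x in v))
    return [x / nrm for x in v]

def similarity(u, v):
    # Cosine similarity: length of the projection of one unit vector
    # onto the other.
    return sum(a * b for a, b in zip(u, v))

def dissimilarity(u, v):
    # Length of the orthogonal component: the sine of the enclosed angle.
    s = similarity(u, v)
    return math.sqrt(max(0.0, 1.0 - s * s))

# Spot-check the triangle inequality d(x, z) <= d(x, y) + d(y, z).
violations = 0
for _ in range(1000):
    x, y, z = (unit([random.random() for _ in range(8)]) for _ in range(3))
    if dissimilarity(x, z) > dissimilarity(x, y) + dissimilarity(y, z) + 1e-12:
        violations += 1
```

    The inequality holds because the sine is monotone and subadditive over the relevant angle range, which is the geometric fact the paper formalizes and exploits for pruning in clustering algorithms.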

  6. Three-dimensional coupled kinetics/thermal- hydraulic benchmark TRIGA experiments

    International Nuclear Information System (INIS)

    Feltus, Madeline Anne; Miller, William Scott

    2000-01-01

    This research project provides separate effects tests in order to benchmark neutron kinetics models coupled with thermal-hydraulic (T/H) models used in best-estimate codes such as the Nuclear Regulatory Commission's (NRC) RELAP and TRAC code series and industrial codes such as RETRAN. Before this research project was initiated, no adequate experimental data existed for reactivity initiated transients that could be used to assess coupled three-dimensional (3D) kinetics and 3D T/H codes which have been, or are being, developed around the world. Using various Training, Research, Isotopes, General Atomics (TRIGA) reactor core configurations at the Penn State Breazeale Reactor (PSBR), it is possible to determine the level of neutronics modeling required to describe kinetics and T/H feedback interactions. This research demonstrates that the small compact PSBR TRIGA core does not necessarily behave as a point kinetics reactor, but that this TRIGA can provide actual test results for 3D kinetics code benchmark efforts. This research focused on developing in-reactor tests that exhibited 3D neutronics effects coupled with 3D T/H feedback. A variety of pulses were used to evaluate the level of kinetics modeling needed for prompt temperature feedback in the fuel. Ramps and square waves were used to evaluate the detail of modeling needed for the delayed T/H feedback of the coolant. A stepped ramp was performed to evaluate and verify the derived thermal constants for the specific PSBR TRIGA core loading pattern. As part of the analytical benchmark research, the STAR 3D kinetics code (STAR: Space and time analysis of reactors, Version 5, Level 3, Users Guide, Yankee Atomic Electric Company, YEAC 1758, Bolton, MA) was used to model the transient experiments. The STAR models were coupled with the one-dimensional (1D) WIGL and LRA and 3D COBRA (COBRA IIIC: A digital computer program for steady-state and transient thermal-hydraulic analysis of rod bundle nuclear fuel elements, Battelle
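    For reference, the point-kinetics baseline that these experiments are designed to challenge collapses the whole core to a single amplitude n(t) coupled to delayed-neutron precursors C(t): dn/dt = ((rho - beta)/Lambda) n + lambda C and dC/dt = (beta/Lambda) n - lambda C. A one-delayed-group sketch with illustrative parameter values (not the PSBR's) is:

```python
# One-delayed-group point-kinetics equations, integrated with explicit Euler.
beta = 0.0065      # delayed-neutron fraction (illustrative)
Lam = 1e-4         # prompt-neutron generation time [s] (illustrative)
lam = 0.08         # precursor decay constant [1/s] (illustrative)

def step(n, c, rho, dt):
    dn = ((rho - beta) / Lam) * n + lam * c
    dc = (beta / Lam) * n - lam * c
    return n + dt * dn, c + dt * dc

# Start at the zero-reactivity equilibrium (dn/dt = dC/dt = 0) and integrate.
n, c = 1.0, beta / (Lam * lam)
for _ in range(1000):
    n, c = step(n, c, rho=0.0, dt=1e-5)
```

    Held at equilibrium the lumped model is static by construction; the benchmark experiments probe precisely the spatial effects that this zero-dimensional description cannot represent.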

  7. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) on a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  8. Role and status of scaled experiments in the development of fluoride-salt-cooled, high-temperature reactors - 15185

    International Nuclear Information System (INIS)

    Zweibaum, N.; Huddar, L.; Laufer, M.R.; Peterson, P.F.; Hughes, J.T.; Blandford, E.D.; Scarlat, R.O.

    2015-01-01

    Development of fluoride-salt-cooled, high-temperature reactor (FHR) technology requires a better understanding of key hydrodynamic and heat transfer phenomena associated with this novel class of reactors. The use of simulant fluids that can match the most important nondimensional numbers between scaled experiments and prototypical FHR systems enables integral effects tests (IETs) to be performed at reduced cost and difficulty for FHR code validation. The University of California at Berkeley (UCB) and the University of New Mexico (UNM) have built a number of IETs and separate effects tests to investigate pebble-bed FHR (PB-FHR) phenomenology using water or simulant oils such as Dowtherm A. PB-FHR pebble motion and porous media flow dynamics have been investigated with UCB's pebble recirculation experiments using water and plastic spheres. Transient flow of high-Prandtl-number fluids around hot spheres has also been investigated by UCB to measure Nusselt numbers in pebble-bed cores, using simulant oils and copper spheres. Finally, single-phase forced/natural circulation has been investigated using the scaled-height, reduced-flow-area loops of the Compact Integral Effects Test facility at UCB and a multi-flow-regime loop at UNM, using Dowtherm A oil. The scaling methodology and status of these ongoing experiments are described here.

  9. Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks

    Science.gov (United States)

    Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.

    2013-12-01

    Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field on fracture intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line. Thus

  10. Parallel 4-dimensional cellular automaton track finder for the CBM experiment

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, Valentina [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); JINR Joint Institute for Nuclear Research, Dubna (Russian Federation); Kisel, Ivan [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The CBM experiment at FAIR will focus on the measurement of rare probes at interaction rates up to 10 MHz. The beam will provide a free stream of particles, so that information about different collisions may overlap in time. This requires full online event reconstruction not only in space, but also in time, so-called 4D (4-dimensional) event building. This is a task of the First-Level Event Selection (FLES) package. The FLES reconstruction package consists of several modules: track finding, track fitting, short-lived particle finding, event building and selection. The Silicon Tracking System (STS) time-measurement information was included in the Cellular Automaton (CA) track finder algorithm. The speed (8.5 ms per event in a time-slice) and efficiency of the 4D track finder algorithm are comparable with those of the event-based analysis. The CA track finder was fully parallelised inside the time-slice. The parallel version achieves a speed-up factor of 10.6 when parallelising across 10 physical cores of an Intel Xeon with hyper-threading. The first version of event building based on the 4D track finder was implemented.

  11. A high-power target experiment

    CERN Document Server

    Kirk, H G; Ludewig, H; Palmer, Robert; Samulyak, V; Simos, N; Tsang, Thomas; Bradshaw, T W; Drumm, Paul V; Edgecock, T R; Ivanyushenkov, Yury; Bennett, Roger; Efthymiopoulos, Ilias; Fabich, Adrian; Haseroth, H; Haug, F; Lettry, Jacques; Hayato, Y; Yoshimura, Koji; Gabriel, Tony A; Graves, Van; Spampinato, P; Haines, John; McDonald, Kirk T

    2005-01-01

    We describe an experiment designed as a proof-of-principle test for a target system capable of converting a 4 MW proton beam into a high-intensity muon beam suitable for incorporation into either a neutrino factory complex or a muon collider. The target system is based on exposing a free mercury jet to an intense proton beam in the presence of a high-strength solenoidal field.

  12. Dimensional consistency achieved in high-performance synchronizing hubs

    Directory of Open Access Journals (Sweden)

    García, P.

    2013-02-01

    The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to improve. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering ("2P2S") production route has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have a great influence on the subsequent production steps and on the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done on a production line that produces synchronizer hubs for manual transmissions, allowing stable conditions to be maintained and the parameters relevant to the product and process to be controlled.

  13. Unfolding methods in high-energy physics experiments

    International Nuclear Information System (INIS)

    Blobel, V.

    1985-01-01

    Distributions measured in high-energy physics experiments are often distorted or transformed by limited acceptance and finite resolution of the detectors. The unfolding of measured distributions is an important, but due to inherent instabilities a very difficult problem. Methods for unfolding, applicable for the analysis of high-energy physics experiments, and their properties are discussed. An introduction is given to the method of regularization. (orig.)
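    A minimal illustration of the regularization idea, assuming the simplest Tikhonov variant (the abstract does not fix a specific scheme): instead of inverting the response matrix R directly, which amplifies statistical fluctuations, one minimizes ||Rx - y||^2 + tau*||x||^2, giving the closed form x = (R^T R + tau I)^(-1) R^T y. The response matrix and bin contents below are invented for illustration.

```python
import numpy as np

# Toy smearing: each true bin leaks 20% into each neighbouring measured bin.
R = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6]])
x_true = np.array([10.0, 30.0, 20.0])
y = R @ x_true                      # "measured" distribution (noise-free here)

def unfold(R, y, tau):
    # Tikhonov-regularized least squares: (R^T R + tau I) x = R^T y.
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + tau * np.eye(n), R.T @ y)

x_hat = unfold(R, y, tau=1e-6)      # small tau: close to the direct inverse
```

    As tau tends to zero this approaches the unstable direct inversion; a larger tau damps the oscillating components at the price of bias, which is the trade-off all regularized unfolding methods negotiate.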

  15. 1D to 3D dimensional crossover in the superconducting transition of the quasi-one-dimensional carbide superconductor Sc3CoC4.

    Science.gov (United States)

    He, Mingquan; Wong, Chi Ho; Shi, Dian; Tse, Pok Lam; Scheidt, Ernst-Wilhelm; Eickerling, Georg; Scherer, Wolfgang; Sheng, Ping; Lortz, Rolf

    2015-02-25

    The transition metal carbide superconductor Sc3CoC4 may represent a new benchmark system of quasi-one-dimensional (quasi-1D) superconducting behavior. We investigate the superconducting transition of a high-quality single-crystalline sample by electrical transport experiments. Our data show that the superconductor goes through a complex dimensional crossover below the onset Tc of 4.5 K. First, a quasi-1D fluctuating superconducting state with finite resistance forms in the [CoC4]∞ ribbons which are embedded in a Sc matrix in this material. At lower temperature, the transversal Josephson or proximity coupling of neighboring ribbons establishes a 3D bulk superconducting state. This dimensional crossover is very similar to that in Tl2Mo6Se6, which for a long time has been regarded as the most appropriate model system of a quasi-1D superconductor. Sc3CoC4 appears to be even more in the 1D limit than Tl2Mo6Se6.

  16. Two-dimensional imaging of Debye-Scherrer ring for tri-axial stress analysis of industrial materials

    International Nuclear Information System (INIS)

    Sasaki, T; Maruyama, Y; Ohba, H; Ejiri, S

    2014-01-01

    In this study, the application of two-dimensional imaging technology to X-ray tri-axial stress analysis was studied. An image plate (IP) was used to record a Debye-Scherrer ring, and the image data were analyzed to determine stress. A new principle for stress analysis, suited to two-dimensional imaging data, was used. To verify this two-dimensional-imaging type of X-ray stress measurement, an experiment was conducted using a ferritic steel sample that had been processed with a surface grinder, and tri-axial stress analysis was carried out to evaluate the sample. The conventional method for X-ray tri-axial stress analysis proposed by Dölle and Hauk was used to evaluate the residual stress for comparison with the present method. As a result, it was confirmed that the two-dimensional imaging technology enables sufficiently precise and fast stress measurement compared with the conventional method.

  17. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment as early comparisons have shown contradictory results due to technological shortcomings. DATA...... Central Register of Controlled Trials database. CONCLUSIONS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D......-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited for the convincing results to be reproduced....

  18. Four-dimensional Hilbert curves for R-trees

    DEFF Research Database (Denmark)

    Haverkort, Herman; Walderveen, Freek van

    2011-01-01

    Two-dimensional R-trees are a class of spatial index structures in which objects are arranged to enable fast window queries: report all objects that intersect a given query window. One of the most successful methods of arranging the objects in the index structure is based on sorting the objects...... according to the positions of their centers along a two-dimensional Hilbert space-filling curve. Alternatively, one may use the coordinates of the objects' bounding boxes to represent each object by a four-dimensional point, and sort these points along a four-dimensional Hilbert-type curve. In experiments...
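
    The sorting step the abstract describes is easy to sketch. The Python below is an illustration, not the authors' code, and the function names are mine: it computes the index of a grid point along a two-dimensional Hilbert curve using the standard rotate-and-flip recurrence, then orders rectangles by the index of their quantized centers. The paper's four-dimensional variant would substitute a 4-D Hilbert-type index over bounding-box coordinates.

```python
def hilbert_index(n, x, y):
    """Distance of integer grid point (x, y) along a Hilbert curve filling
    an n x n grid (n a power of two); standard rotate-and-flip recurrence."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:               # rotate/flip the quadrant
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_sort(rects, order=16):
    """Sort rectangles (xmin, ymin, xmax, ymax), assumed to lie in the unit
    square, by the Hilbert index of their centers on a 2**order grid."""
    n = 2 ** order
    def key(r):
        cx = min(n - 1, int((r[0] + r[2]) / 2 * n))
        cy = min(n - 1, int((r[1] + r[3]) / 2 * n))
        return hilbert_index(n, cx, cy)
    return sorted(rects, key=key)
```

    Sorting by this key tends to place spatially nearby rectangles in nearby index positions, which is exactly the locality property R-tree packing exploits.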

  19. High-definition resolution three-dimensional imaging systems in laparoscopic radical prostatectomy: randomized comparative study with high-definition resolution two-dimensional systems.

    Science.gov (United States)

    Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi

    2015-08-01

    Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery involves converting two-dimensional (2D) images into 3D images and depth perception rearrangement. 3D imaging may remove the need for depth perception rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcome of 3D-high-definition (HD) resolution and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether an LRP under HD resolution 3D imaging is superior to that under HD resolution 2D imaging in perioperative outcome, feasibility, and fatigue. One-hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was time to perform vesicourethral anastomosis (VUA), which is technically demanding and may include a number of technical difficulties considered in laparoscopic surgeries. VUA time was not significantly shorter in the 3D group (26.7 min, mean) compared with the 2D group (30.1 min, mean) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors for shorter VUA times (p = 0.000, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion case from 3D to 2D or LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency. Results were not different between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.

  20. Integration of fringe projection and two-dimensional digital image correlation for three-dimensional displacements measurements

    Science.gov (United States)

    Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.

    2016-12-01

    A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speeds. The potential of the proposed methodology has been demonstrated through the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large deformation problems.

  1. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: The development of high-dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high-dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction

  2. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  3. Faithful representation of similarities among three-dimensional shapes in human vision.

    Science.gov (United States)

    Cutzu, F; Edelman, S

    1996-01-01

    Efficient and reliable classification of visual stimuli requires that their representations reside in a low-dimensional and, therefore, computationally manageable feature space. We investigated the ability of the human visual system to derive such representations from the sensory input, a highly nontrivial task, given the million or so dimensions of the visual signal at its entry point to the cortex. In a series of experiments, subjects were presented with sets of parametrically defined shapes; the points in the common high-dimensional parameter space corresponding to the individual shapes formed regular planar (two-dimensional) patterns such as a triangle, a square, etc. We then used multidimensional scaling to arrange the shapes in planar configurations, dictated by their experimentally determined perceived similarities. The resulting configurations closely resembled the original arrangements of the stimuli in the parameter space. This achievement of the human visual system was replicated by a computational model derived from a theory of object representation in the brain, according to which similarities between objects, and not the geometry of each object, need to be faithfully represented. PMID:8876260

  4. Two-dimensional magnetic field evolution measurements and plasma flow speed estimates from the coaxial thruster experiment

    International Nuclear Information System (INIS)

    Black, D.C.; Mayo, R.M.; Gerwin, R.A.; Schoenberg, K.F.; Scheuer, J.T.; Hoyt, R.P.; Henins, I.

    1994-01-01

    Local, time-dependent magnetic field measurements have been made in the Los Alamos coaxial thruster experiment (CTX) [C. W. Barnes et al., Phys. Fluids B 2, 1871 (1990); J. C. Fernandez et al., Nucl. Fusion 28, 1555 (1988)] using a 24-coil magnetic probe array (eight spatial positions, three-axis probes). The CTX is a magnetized, coaxial plasma gun presently being used to investigate the viability of high pulsed power plasma thrusters for advanced electric propulsion. Previous efforts on this device have indicated that high pulsed power plasma guns are attractive candidates for advanced propulsion that employ ideal magnetohydrodynamic (MHD) plasma stream flow through self-formed magnetic nozzles. Indirect evidence of magnetic nozzle formation was obtained from plasma gun performance and measurements of directed axial velocities up to v_z ∼ 10^7 cm/s. The purpose of this work is to make direct measurement of the time-evolving magnetic field topology. The intent is to both identify that applied magnetic field distortion by the highly conductive plasma is occurring, and to provide insight into the details of discharge evolution. Data from a magnetic fluctuation probe array have been used to investigate the details of applied magnetic field deformation through the reconstruction of time-dependent flux profiles. Experimentally observed magnetic field line distortion has been compared to that predicted by a simple one-dimensional (1-D) model of the discharge channel. Such a comparison is utilized to estimate the axial plasma velocity in the thruster. Velocities determined in this manner are in approximate agreement with the predicted self-field magnetosonic speed and those measured by a time-of-flight spectrometer

  5. Atmospheric and dispersion modeling in areas of highly complex terrain employing a four-dimensional data assimilation technique

    International Nuclear Information System (INIS)

    Fast, J.D.; O'Steen, B.L.

    1994-01-01

    The results of this study indicate that the current data assimilation technique can have a positive impact on the mesoscale flow fields; however, care must be taken in its application to grids of relatively fine horizontal resolution. Continuous FDDA is a useful tool in producing high-resolution mesoscale analysis fields that can be used to (1) create better initial conditions for mesoscale atmospheric models and (2) drive transport models for dispersion studies. While RAMS is capable of predicting the qualitative flow during this evening, additional experiments need to be performed to improve the prognostic forecasts made by RAMS and refine the FDDA procedure so that the overall errors are reduced even further. Despite the fact that a great deal of computational time is necessary in executing RAMS and LPDM in the configuration employed in this study, recent advances in workstations are making applications such as this more practical. As the speed of these machines increases in the next few years, it will become feasible to employ prognostic, three-dimensional mesoscale/transport models to routinely predict atmospheric dispersion of pollutants, even over highly complex terrain. For example, the version of RAMS in this study could be run in a "nowcasting" mode that would continually assimilate local and regional observations as soon as they become available. The atmospheric physics in the model would be used to determine the wind field where no observations are available. The three-dimensional flow fields could be used as dynamic initial conditions for a model forecast. The output from this type of modeling system will have to be compared to existing diagnostic, mass-consistent models to determine whether the wind field and dispersion forecasts are significantly improved.

  6. Two- and three-dimensional heat analysis inside a high pressure electrical discharge tube

    International Nuclear Information System (INIS)

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents a heat transfer analysis for a horizontal high-pressure mercury vapor discharge tube. To obtain a more realistic numerical simulation, radiative heat transfer in bands of different wavelength widths has been included alongside convection and conduction. The analysis has been carried out for different gases at different pressures in both two- and three-dimensional cases, and the results are compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied.

  7. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
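
    As a toy stand-in for the flows and plasma devices studied in the report, the sketch below applies a small sinusoidal perturbation to the parameter of the logistic map, a minimal chaotic system. All parameter values and function names are illustrative assumptions, and the crude finite-time Lyapunov estimate only indicates whether a given orbit behaves chaotically.

```python
import math

def logistic_orbit(r0, eps=0.0, period=2, x0=0.3, n=5000):
    """Iterate x -> r_n * x * (1 - x) with a periodically modulated
    parameter r_n = r0 + eps * sin(2*pi*n/period)."""
    xs, x = [], x0
    for i in range(n):
        r = r0 + eps * math.sin(2 * math.pi * i / period)
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def lyapunov(r0, eps=0.0, period=2, x0=0.3, n=5000):
    """Finite-time average of log|d f/d x| along the orbit; a clearly
    positive value indicates chaos, near-zero or negative a regular orbit."""
    lam, x = 0.0, x0
    for i in range(n):
        r = r0 + eps * math.sin(2 * math.pi * i / period)
        lam += math.log(abs(r * (1 - 2 * x)) + 1e-12)
        x = r * x * (1 - x)
    return lam / n
```

    Sweeping `eps` and `period` and watching where the Lyapunov estimate drops mirrors, in miniature, the paper's numerical search for perturbation frequencies that convert chaos into limit cycles.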

  8. Dimensionality reduction of collective motion by principal manifolds

    Science.gov (United States)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.

  9. Gouy Phase Radial Mode Sorter for Light: Concepts and Experiments.

    Science.gov (United States)

    Gu, Xuemei; Krenn, Mario; Erhard, Manuel; Zeilinger, Anton

    2018-03-09

    We present an in-principle lossless sorter for radial modes of light, using accumulated Gouy phases. The experimental setups have been found by a computer algorithm, and can be intuitively understood in a geometric way. Together with the ability to sort angular-momentum modes, we now have access to the complete two-dimensional transverse plane of light. The device can readily be used for multiplexing classical information. On a quantum level, it is an analog of the Stern-Gerlach experiment, significant for the discussion of fundamental concepts in quantum physics. As such, it can be applied in high-dimensional and multiphotonic quantum experiments.

  10. The songbird syrinx morphome: a three-dimensional, high-resolution, interactive morphological map of the zebra finch vocal organ

    Directory of Open Access Journals (Sweden)

    Düring Daniel N

    2013-01-01

    Full Text Available Abstract Background Like human infants, songbirds learn their species-specific vocalizations through imitation learning. The birdsong system has emerged as a widely used experimental animal model for understanding the underlying neural mechanisms responsible for vocal production learning. However, how neural impulses are translated into the precise motor behavior of the complex vocal organ (syrinx) to create song is poorly understood. First and foremost, we lack a detailed understanding of syringeal morphology. Results To fill this gap we combined non-invasive (high-field magnetic resonance imaging and micro-computed tomography and invasive techniques (histology and micro-dissection to construct the annotated high-resolution three-dimensional dataset, or morphome, of the zebra finch (Taeniopygia guttata syrinx. We identified and annotated syringeal cartilage, bone and musculature in situ in unprecedented detail. We provide interactive three-dimensional models that greatly improve the communication of complex morphological data and our understanding of syringeal function in general. Conclusions Our results show that the syringeal skeleton is optimized for low weight driven by physiological constraints on song production. The present refinement of muscle organization and identity elucidates how apposed muscles actuate different syringeal elements. Our dataset allows for more precise predictions about muscle co-activation and synergies and has important implications for muscle activity and stimulation experiments. We also demonstrate how the syrinx can be stabilized during song to reduce mechanical noise and, as such, enhance repetitive execution of stereotypic motor patterns. In addition, we identify a cartilaginous structure suited to play a crucial role in the uncoupling of sound frequency and amplitude control, which permits a novel explanation of the evolutionary success of songbirds.

  11. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.

    Science.gov (United States)

    Wang, Xueyi

    2012-02-08

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
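
    The two-stage idea can be sketched as follows. This is a simplified illustration of the kMkNN strategy, not the authors' implementation, and all names are mine: a plain k-means buildup, then a search that visits clusters nearest-center-first and uses the triangle-inequality lower bound |d(q,c) - d(x,c)| <= d(q,x) to skip exact distance computations. The search stays exact because only points that provably cannot enter the current k-best are skipped.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two equal-length coordinate lists."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def kmeans(points, k, iters=10, seed=0):
    """Minimal Lloyd's k-means; returns final centers and the clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, centers[j]))].append(p)
        centers = [
            [sum(c) / len(g) for c in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups

def kmknn_query(q, centers, groups, k=1):
    """Exact k-NN search: scan clusters nearest-center-first, pruning any
    point whose triangle-inequality bound cannot beat the current k-th best.
    (For clarity, d(x, center) is recomputed here; the real algorithm caches
    these distances during the buildup stage.)"""
    best = []  # up to k (distance, point) pairs, kept sorted by distance
    for j in sorted(range(len(centers)), key=lambda j: dist(q, centers[j])):
        dqc = dist(q, centers[j])
        for x in groups[j]:
            if len(best) == k and abs(dqc - dist(x, centers[j])) >= best[-1][0]:
                continue  # x provably cannot enter the current k-best
            best.append((dist(q, x), x))
            best.sort(key=lambda t: t[0])
            best[:] = best[:k]
    return best
```

    Against a brute-force scan, the pruned search returns identical neighbors; the saving comes from how many `dist(q, x)` evaluations the bound lets it skip.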

  12. Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors

    Science.gov (United States)

    2017-07-01

    AFRL-RY-WP-TR-2017-0143: Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors. Qing Hao, The University of Arizona. This report is available to the general public, including foreign nationals. Its reference list includes Rao, H. & Bosman, G., "Hot-electron induced defect generation in AlGaN/GaN high electron mobility transistors."

  13. Noise-induced drift in two-dimensional anisotropic systems

    Science.gov (United States)

    Farago, Oded

    2017-10-01

    We study the isothermal Brownian dynamics of a particle in a system with spatially varying diffusivity. Due to the heterogeneity of the system, the particle's mean displacement does not vanish even if it does not experience any physical force. This phenomenon has been termed "noise-induced drift," and has been extensively studied for one-dimensional systems. Here, we examine the noise-induced drift in a two-dimensional anisotropic system, characterized by a symmetric diffusion tensor with unequal diagonal elements. A general expression for the mean displacement vector is derived and presented as a sum of two vectors, depicting two distinct drifting effects. The first vector describes the tendency of the particle to drift toward the high diffusivity side in each orthogonal principal diffusion direction. This is a generalization of the well-known expression for the noise-induced drift in one-dimensional systems. The second vector represents a novel drifting effect, not found in one-dimensional systems, originating from the spatial rotation in the directions of the principal axes. The validity of the derived expressions is verified by using Langevin dynamics simulations. As a specific example, we consider the relative diffusion of two transmembrane proteins, and demonstrate that the average distance between them increases at a surprisingly fast rate of several tens of micrometers per second.
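
    A one-dimensional version of the effect is easy to reproduce numerically. The sketch below is an illustration with assumed parameters, not the paper's model: it integrates the Ito-form Langevin equation dx = D'(x) dt + sqrt(2 D(x)) dW, which corresponds to the isothermal convention for a spatially varying diffusivity. Despite the absence of any force, the ensemble-averaged displacement grows at roughly the rate D'(x), i.e. the particle drifts toward the high-diffusivity side.

```python
import math
import random

def simulate_drift(x0=1.0, dt=1e-4, steps=300, samples=3000, seed=7):
    """Euler-Maruyama sampling of dx = D'(x) dt + sqrt(2 D(x)) dW for the
    illustrative linear profile D(x) = a + b*x (so D'(x) = b).  Returns the
    ensemble-mean displacement after time steps*dt."""
    a, b = 1.0, 1.0                       # assumed diffusivity profile
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = x0
        for _ in range(steps):
            D = max(a + b * x, 1e-12)     # guard against negative diffusivity
            x += b * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        total += x - x0
    return total / samples
```

    With these parameters the elapsed time is 0.03, so the mean displacement should come out near D'(x0) * t = 0.03 rather than zero, the one-dimensional "noise-induced drift" that the abstract generalizes to anisotropic two-dimensional diffusion tensors.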

  14. Machine Learning Control For Highly Reconfigurable High-Order Systems

    Science.gov (United States)

    2015-01-02

    AFRL-OSR-VA-TR-2015-0012: Machine Learning Control for Highly Reconfigurable High-Order Systems. John Valasek, Texas Engineering Experiment Station, Aerospace Engineering. Grant FA9550-11-1-0302; period of performance 1 July 2011 – 29 September 2014. The record's cited references include a paper on calibration and applications from the 2010 IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications (MESA).

  15. Phase transitions in two-dimensional systems

    International Nuclear Information System (INIS)

    Salinas, S.R.A.

    1983-01-01

    Experiments using synchrotron radiation beams to characterize solid-liquid (melting) and commensurate solid-incommensurate solid transitions in two-dimensional systems are described. Some ideas involved in modern theories of two-dimensional melting are briefly presented. The systems treated consist of noble gases (Kr, Ar, Xe) adsorbed on the basal plane of graphite and of thin films formed by some liquid crystal phases. (L.C.) [pt

  16. Thermodynamics of noncommutative high-dimensional AdS black holes with non-Gaussian smeared matter distributions

    CERN Document Server

    Miao, Yan-Gang

    2016-01-01

    Considering non-Gaussian smeared matter distributions, we investigate thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable to the 6- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole when the Hawking temperature lies within a specific range, but fails when the Hawking temperature is beyond this range.

  17. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

    A review is given of recent papers on this subject, which have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  18. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
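
    The benefit of borrowing strength across variables can be illustrated with a much cruder estimator than the MVR procedure itself. The sketch below simply shrinks each variable's sample variance toward the pooled average; the function name and the fixed shrinkage weight are my assumptions, and the actual package instead forms local pools via its similarity-statistic clustering.

```python
import statistics

def shrink_variances(groups, lam=0.5):
    """Shrink each variable's sample variance toward the pooled average:
    v_shrunk = lam * v_pooled + (1 - lam) * v_sample.
    With few observations per variable, pulling extreme variance estimates
    toward a common value stabilizes them, the same intuition that motivates
    Stein-type joint regularization."""
    sample_vars = [statistics.variance(g) for g in groups]
    pooled = sum(sample_vars) / len(sample_vars)
    return [lam * pooled + (1 - lam) * v for v in sample_vars]
```

    Shrunken variances are always strictly between each raw estimate and the pooled value, so the most extreme (least reliable) per-variable estimates move the furthest.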

  19. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). Adoption of parallelization techniques and of a hybrid simulation model for the δf Monte Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques for achieving both high vectorization and parallelization efficiency, adoption of a parallel random number generator, and benchmark results are shown. (author)

  20. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    Full Text Available The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre/post-processing steps including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.

  1. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  2. Graphene quantum dots-three-dimensional graphene composites for high-performance supercapacitors.

    Science.gov (United States)

    Chen, Qing; Hu, Yue; Hu, Chuangang; Cheng, Huhu; Zhang, Zhipan; Shao, Huibo; Qu, Liangti

    2014-09-28

    Graphene quantum dots (GQDs) have been successfully deposited onto three-dimensional graphene (3DG) by a benign electrochemical method, and the ordered 3DG structure remains intact after the uniform deposition of GQDs. In addition, the capacitive properties of the as-formed GQD-3DG composites are evaluated in symmetrical supercapacitors. It is found that the supercapacitor fabricated from the GQD-3DG composite is highly stable and exhibits a high specific capacitance of 268 F g^-1, representing a more than 90% improvement over that of the supercapacitor made from pure 3DG electrodes (136 F g^-1). Owing to the convenience of the current method, it can be further applied to other well-defined electrode materials, such as carbon nanotubes, carbon aerogels and conjugated polymers, to improve the performance of supercapacitors.
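
    The reported improvement is straightforward to check, and specific capacitance from a galvanostatic discharge is conventionally computed as C = I·Δt/(m·ΔV). The sketch below uses that standard formula as an assumption, since the abstract does not state how the 268 F/g figure was obtained; the illustrative discharge values are mine.

```python
def specific_capacitance(current_a, discharge_s, mass_g, window_v):
    """Specific capacitance in F/g from a galvanostatic discharge:
    C = I * dt / (m * dV)."""
    return current_a * discharge_s / (mass_g * window_v)

# Hypothetical discharge: 1 mA for 268 s on a 1 mg electrode over a 1 V window
c_composite = specific_capacitance(0.001, 268.0, 0.001, 1.0)

# Relative improvement of the reported 268 F/g over the 136 F/g baseline
improvement_pct = (268.0 - 136.0) / 136.0 * 100.0
```

    The improvement works out to about 97%, consistent with the abstract's "more than 90%" claim.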

  3. Face and content validation of a novel three-dimensional printed temporal bone for surgical skills development.

    Science.gov (United States)

    Da Cruz, M J; Francis, H W

    2015-07-01

    To assess the face and content validity of a novel synthetic, three-dimensional printed temporal bone for surgical skills development and training. A synthetic temporal bone was printed using composite materials and three-dimensional printing technology. Surgical trainees were asked to complete three structured temporal bone dissection exercises. Attitudes and impressions were then assessed using a semi-structured questionnaire, with previous cadaver and real operating experiences used as a reference. Trainees' experiences of the synthetic temporal bone were analysed in terms of four domains: anatomical realism, usefulness as a training tool, task-based usefulness and overall reactions. Responses across all domains indicated a high degree of acceptance, suggesting that the three-dimensional printed temporal bone is a useful tool in skills development. A sophisticated three-dimensional printed temporal bone demonstrating face and content validity was developed. The cost savings, coupled with the low associated biohazards, make it likely that the printed temporal bone will be incorporated into traditional temporal bone skills development programmes in the near future.

  4. Three-Dimensional Vibration Isolator for Suppressing High-Frequency Responses for Sage III Contamination Monitoring Package (CMP)

    Science.gov (United States)

    Li, Y.; Cutright, S.; Dyke, R.; Templeton, J.; Gasbarre, J.; Novak, F.

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment (SAGE) III - International Space Station (ISS) instrument will be used to study ozone, providing global, long-term measurements of key components of the Earth's atmosphere for the continued health of Earth and its inhabitants. SAGE III is launched into orbit in an inverted configuration on SpaceX's Falcon 9 launch vehicle. As one of its four supporting elements, a Contamination Monitoring Package (CMP) mounted to the top panel of the Interface Adapter Module (IAM) box experiences high-frequency response due to structural coupling between the two structures during the SpaceX launch. These vibrations, which were initially observed in the IAM Engineering Development Unit (EDU) test and later verified through finite element analysis (FEA) for the SpaceX launch loads, may damage the internal electronic cards and the Thermoelectric Quartz Crystal Microbalance (TQCM) sensors mounted on the CMP. Three-dimensional (3D) vibration isolators were required to be inserted between the CMP and IAM interface in order to attenuate the high-frequency vibrations without resulting in any major changes to the existing system. Wire rope isolators were proposed as the isolation system between the CMP and IAM due to their low impact on the design. Most 3D isolation systems are designed for compression and roll, so little dynamic data was available for using wire rope isolators in an inverted or tension configuration. From the isolator FEA and test results, it is shown that by using the 3D wire rope isolators, the CMP high-frequency responses have been suppressed by several orders of magnitude over a wide excitation frequency range. Consequently, the TQCM sensor responses are well below their qualification environments. These results indicate that such high-frequency responses due to typical instrument structural coupling can be significantly suppressed by passive vibration control using 3D vibration isolators. Thermal and contamination

  5. A high-order integral solver for scalar problems of diffraction by screens and apertures in three-dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, Oscar P., E-mail: obruno@caltech.edu; Lintner, Stéphane K.

    2013-11-01

    We present a novel methodology for the numerical solution of problems of diffraction by infinitely thin screens in three-dimensional space. Our approach relies on new integral formulations as well as associated high-order quadrature rules. The new integral formulations involve weighted versions of the classical integral operators related to the thin-screen Dirichlet and Neumann problems as well as a generalization to the open-surface problem of the classical Calderón formulae. The high-order quadrature rules we introduce for these operators, in turn, resolve the multiple Green function and edge singularities (which occur at arbitrarily close distances from each other, and which include weakly singular as well as hypersingular kernels) and thus give rise to super-algebraically fast convergence as the discretization sizes are increased. When used in conjunction with Krylov-subspace linear algebra solvers such as GMRES, the resulting solvers produce results of high accuracy in small numbers of iterations for low and high frequencies alike. We demonstrate our methodology with a variety of numerical results for screen and aperture problems at high frequencies—including simulation of classical experiments such as the diffraction by a circular disc (featuring in particular the famous Poisson spot), evaluation of interference fringes resulting from diffraction across two nearby circular apertures, as well as solution of problems of scattering by more complex geometries consisting of multiple scatterers and cavities.

  6. Carbon doped GaAs/AlGaAs heterostructures with high mobility two dimensional hole gas

    Energy Technology Data Exchange (ETDEWEB)

    Hirmer, Marika; Bougeard, Dominique; Schuh, Dieter [Institut fuer Experimentelle und Angewandte Physik, Universitaet Regensburg, D 93040 Regensburg (Germany); Wegscheider, Werner [Laboratorium fuer Festkoerperphysik, ETH Zuerich, 8093 Zuerich (Switzerland)

    2011-07-01

    Two-dimensional hole gases (2DHG) with high carrier mobilities are required both for fundamental research and for possible future ultrafast spintronic devices. Here, two different types of GaAs/AlGaAs heterostructures hosting a 2DHG were investigated. The first structure is a GaAs quantum well (QW) embedded in an AlGaAs barrier, grown by molecular beam epitaxy with carbon doping on only one side of the QW (single side doped, ssd), while the second structure is similar but with symmetrically arranged doping layers on both sides of the QW (double side doped, dsd). The ssd-structure shows hole mobilities up to 1.2*10{sup 6} cm{sup 2}/Vs, which are achieved after illumination. In contrast, the dsd-structure hosts a 2DHG with mobility up to 2.05*10{sup 6} cm{sup 2}/Vs; here, carrier mobility and carrier density are not affected by illuminating the sample. Both samples showed distinct Shubnikov-de Haas oscillations and fractional quantum-Hall plateaus in magnetotransport experiments done at 20 mK, indicating the high quality of the material. In addition, the influence of different temperature profiles during growth and of the Al content of the Al{sub x}Ga{sub 1-x}As barrier on carrier concentration and mobility was investigated and is presented here.

  7. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    Science.gov (United States)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal-solution problem; the optimal receiver coordinate can therefore be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, the 3-D high-precision indoor positioning can be realized through the global optimization of the Tabu search algorithm once the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from these data that 3-D indoor positioning based on the Tabu search algorithm achieves centimeter-level accuracy. The algorithm is effective and practical, and is superior to other existing methods for visible light indoor positioning.
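    The positioning step described above is an optimization problem: find the receiver coordinate whose distances to the LEDs best match the measured values. Below is a minimal pure-Python sketch of a tabu search over a coordinate lattice; the LED layout, step size, neighbourhood, and fitness function are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import deque

def fitness(p, leds, dists):
    """Sum of squared errors between candidate-to-LED distances and measurements."""
    return sum((math.dist(p, led) - d) ** 2 for led, d in zip(leds, dists))

def tabu_search(leds, dists, start, step=0.5, iters=200, tabu_len=20):
    """Minimise the fitness over 3-D receiver coordinates with a short-term tabu list."""
    moves = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    current = best = start
    tabu = deque(maxlen=tabu_len)  # recently visited coordinates are forbidden
    for _ in range(iters):
        tabu.append(current)
        neighbours = [tuple(c + step * m for c, m in zip(current, mv)) for mv in moves]
        # aspiration criterion: a tabu move is allowed if it beats the best-so-far
        candidates = [n for n in neighbours
                      if n not in tabu or fitness(n, leds, dists) < fitness(best, leds, dists)]
        if not candidates:
            break
        current = min(candidates, key=lambda n: fitness(n, leds, dists))
        if fitness(current, leds, dists) < fitness(best, leds, dists):
            best = current
    return best
```

    With exact (noise-free) distances to four ceiling-mounted LEDs, the search converges to a lattice point whose fitness is essentially zero; real received-signal measurements would add noise and a finer step schedule.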

  8. Evaluation of aqueductal patency in patients with hydrocephalus: Three-dimensional high-sampling efficiency technique(SPACE) versus two-dimensional turbo spin echo at 3 Tesla

    International Nuclear Information System (INIS)

    Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut

    2014-01-01

    To compare the accuracy of diagnosing aqueductal patency and image quality between the high spatial resolution three-dimensional (3D) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE) at 3 T in patients with hydrocephalus. This retrospective study included 99 patients diagnosed with hydrocephalus. T2W 3D-SPACE was added to the routine sequences, which consisted of T2W 2D-TSE, 3D constructive interference steady state (CISS), and cine phase-contrast MRI (PC-MRI). Two radiologists independently evaluated the patency of the cerebral aqueduct and image quality on T2W 2D-TSE and T2W 3D-SPACE. PC-MRI and 3D-CISS were used as the references for aqueductal patency and image quality, respectively. Inter-observer agreement was calculated using kappa statistics. The evaluations of aqueductal patency by T2W 3D-SPACE and T2W 2D-TSE agreed with PC-MRI in 100% (99/99; sensitivity, 100% [83/83]; specificity, 100% [16/16]) and 83.8% (83/99; sensitivity, 80.7% [67/83]; specificity, 100% [16/16]) of cases, respectively (p < 0.001). There was no significant difference in image quality between T2W 2D-TSE and T2W 3D-SPACE (p = 0.056). The kappa values for inter-observer agreement were 0.714 for T2W 2D-TSE and 0.899 for T2W 3D-SPACE. Three-dimensional SPACE is superior to 2D-TSE for the evaluation of aqueductal patency in hydrocephalus. T2W 3D-SPACE may hold promise as a highly accurate alternative to PC-MRI for the physiological and morphological evaluation of aqueductal patency.

  9. Evaluation of aqueductal patency in patients with hydrocephalus: Three-dimensional high-sampling efficiency technique(SPACE) versus two-dimensional turbo spin echo at 3 Tesla

    Energy Technology Data Exchange (ETDEWEB)

    Ucar, Murat; Guryildirim, Melike; Tokgoz, Nil; Kilic, Koray; Borcek, Alp; Oner, Yusuf; Akkan, Koray; Tali, Turgut [School of Medicine, Gazi University, Ankara (Turkey)

    2014-12-15

    To compare the accuracy of diagnosing aqueductal patency and image quality between the high spatial resolution three-dimensional (3D) high-sampling-efficiency technique (sampling perfection with application optimized contrast using different flip angle evolutions [SPACE]) and T2-weighted (T2W) two-dimensional (2D) turbo spin echo (TSE) at 3 T in patients with hydrocephalus. This retrospective study included 99 patients diagnosed with hydrocephalus. T2W 3D-SPACE was added to the routine sequences, which consisted of T2W 2D-TSE, 3D constructive interference steady state (CISS), and cine phase-contrast MRI (PC-MRI). Two radiologists independently evaluated the patency of the cerebral aqueduct and image quality on T2W 2D-TSE and T2W 3D-SPACE. PC-MRI and 3D-CISS were used as the references for aqueductal patency and image quality, respectively. Inter-observer agreement was calculated using kappa statistics. The evaluations of aqueductal patency by T2W 3D-SPACE and T2W 2D-TSE agreed with PC-MRI in 100% (99/99; sensitivity, 100% [83/83]; specificity, 100% [16/16]) and 83.8% (83/99; sensitivity, 80.7% [67/83]; specificity, 100% [16/16]) of cases, respectively (p < 0.001). There was no significant difference in image quality between T2W 2D-TSE and T2W 3D-SPACE (p = 0.056). The kappa values for inter-observer agreement were 0.714 for T2W 2D-TSE and 0.899 for T2W 3D-SPACE. Three-dimensional SPACE is superior to 2D-TSE for the evaluation of aqueductal patency in hydrocephalus. T2W 3D-SPACE may hold promise as a highly accurate alternative to PC-MRI for the physiological and morphological evaluation of aqueductal patency.
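    The inter-observer agreement figures reported above are kappa statistics. As a sketch, Cohen's kappa for two raters over the same items can be computed as follows; the binary patent/non-patent ratings in the example are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # observed agreement: fraction of items where both raters gave the same label
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

    For example, ratings `[1, 1, 0, 0]` versus `[1, 0, 0, 0]` agree on 3 of 4 items (p_o = 0.75) with chance agreement p_e = 0.5, giving kappa = 0.5.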

  10. High-resolution liquid patterns via three-dimensional droplet shape control.

    Science.gov (United States)

    Raj, Rishi; Adera, Solomon; Enright, Ryan; Wang, Evelyn N

    2014-09-25

    Understanding liquid dynamics on surfaces can provide insight into nature's design and enable fine manipulation capability in biological, manufacturing, microfluidic and thermal management applications. Of particular interest is the ability to control the shape of the droplet contact area on the surface, which is typically circular on a smooth homogeneous surface. Here, we show the ability to tailor various droplet contact area shapes ranging from squares, rectangles, hexagons, octagons, to dodecagons via the design of the structure or chemical heterogeneity on the surface. We simultaneously obtain the necessary physical insights to develop a universal model for the three-dimensional droplet shape by characterizing the droplet side and top profiles. Furthermore, arrays of droplets with controlled shapes and high spatial resolution can be achieved using this approach. This liquid-based patterning strategy promises low-cost fabrication of integrated circuits, conductive patterns and bio-microarrays for high-density information storage and miniaturized biochips and biosensors, among others.

  11. Volume scanning three-dimensional display with an inclined two-dimensional display and a mirror scanner

    Science.gov (United States)

    Miyazaki, Daisuke; Kawanishi, Tsuyoshi; Nishimura, Yasuhiro; Matsushita, Kenji

    2001-11-01

    A new three-dimensional display system based on a volume-scanning method is demonstrated. To form a three-dimensional real image, an inclined two-dimensional image is rapidly moved with a mirror scanner while the cross-section patterns of a three-dimensional object are displayed sequentially. A vector-scan CRT display unit is used to obtain a high-resolution image. An optical scanning system is constructed with concave mirrors and a galvanometer mirror. It is confirmed that three-dimensional images, formed by the experimental system, satisfy all the criteria for human stereoscopic vision.

  12. Inelastic X-ray scattering experiments at extreme conditions: high temperatures and high pressures

    Directory of Open Access Journals (Sweden)

    S.Hosokawa

    2008-03-01

    Full Text Available In this article, we review the present status of experimental techniques under extreme conditions of high temperature and high pressure used for inelastic X-ray scattering (IXS experiments of liquid metals, semiconductors, molten salts, molecular liquids, and supercritical water and methanol. For high temperature experiments, some types of single-crystal sapphire cells were designed depending on the temperature of interest and the sample thickness for the X-ray transmission. Single-crystal diamond X-ray windows attached to the externally heated high-pressure vessel were used for the IXS experiment of supercritical water and methanol. Some typical experimental results are also given, and the perspective of IXS technique under extreme conditions is discussed.

  13. Sensitivity improvement for correlations involving arginine side-chain Nε/Hε resonances in multi-dimensional NMR experiments using broadband 15N 180° pulses

    International Nuclear Information System (INIS)

    Iwahara, Junji; Clore, G. Marius

    2006-01-01

    Due to practical limitations in available 15N rf field strength, imperfections in 15N 180° pulses arising from off-resonance effects can result in significant sensitivity loss, even if the chemical shift offset is relatively small. Indeed, in multi-dimensional NMR experiments optimized for protein backbone amide groups, cross-peaks arising from the Arg guanidino 15Nε (∼85 ppm) are highly attenuated by the presence of multiple INEPT transfer steps. To improve the sensitivity for correlations involving Arg Nε-Hε groups, we have incorporated 15N broadband 180° pulses into 3D 15N-separated NOE-HSQC and HNCACB experiments. Two 15N-WURST pulses incorporated at the INEPT transfer steps of the 3D 15N-separated NOE-HSQC pulse sequence resulted in a ∼1.5-fold increase in sensitivity for the Arg Nε-Hε signals at 800 MHz. For the 3D HNCACB experiment, five 15N Abramovich-Vega pulses were incorporated for broadband inversion and refocusing, and the sensitivity of Arg 1Hε-15Nε-13Cγ/13Cδ correlation peaks was enhanced by a factor of ∼1.7 at 500 MHz. These experiments eliminate the necessity for additional experiments to assign the Arg 1Hε and 15Nε resonances. In addition, the increased sensitivity afforded for the detection of NOE cross-peaks involving correlations with the 15Nε/1Hε of Arg in 3D 15N-separated NOE experiments should prove to be very useful for structural analysis of interactions involving Arg side-chains

  14. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N. N.; Mazur, E. A.

    2016-01-01

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PHk are analyzed. The properties of the initial substance, namely, diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with those of similar structures of sulfur hydrides.

  15. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.

  16. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Degtyarenko, N. N.; Mazur, E. A., E-mail: eugen-mazur@mail.ru [National Research Nuclear University MEPhI (Russian Federation)

    2016-08-15

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PH{sub k} are analyzed. The properties of the initial substance, namely, diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH{sub 3}, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with those of similar structures of sulfur hydrides.

  17. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation...... and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects...... of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...
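    When ground-truth labels exist for synthetic benchmark data, external measures can score a clustering against them on a common basis. A pure-Python sketch of one widely used external measure, the Adjusted Rand Index, is given below; it is offered for illustration and is not necessarily among the specific measures the authors survey.

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two flat clusterings of the same items (chance-corrected)."""
    n = len(labels_a)
    # contingency counts: how many items share each (cluster_a, cluster_b) pair
    pairs = Counter(zip(labels_a, labels_b))
    index = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance level of the index
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

    ARI is 1.0 for identical partitions regardless of label names (e.g. `[0, 0, 1, 1]` vs `[1, 1, 0, 0]`) and is near 0 for random assignments, which is what makes it a fair basis for comparing paradigms.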

  18. High-order harmonic generation from a two-dimensional band structure

    Science.gov (United States)

    Jin, Jian-Zhao; Xiao, Xiang-Ru; Liang, Hao; Wang, Mu-Xue; Chen, Si-Ge; Gong, Qihuang; Peng, Liang-You

    2018-04-01

    In the past few years, harmonic generation in solids has attracted tremendous attention. Recently, some experiments of two-dimensional (2D) monolayer or few-layer materials have been carried out. These studies demonstrated that harmonic generation in the 2D case shows a strong dependence on the laser's orientation and ellipticity, which calls for a quantitative theoretical interpretation. In this work, we carry out a systematic study on the harmonic generation from a 2D band structure based on a numerical solution to the time-dependent Schrödinger equation. By comparing with the 1D case, we find that the generation dynamics can have a significant difference due to the existence of many crossing points in the 2D band structure. In particular, the higher conduction bands can be excited step by step via these crossing points and the total contribution of the harmonic is given by the mixing of transitions between different clusters of conduction bands to the valence band. We also present the orientation dependence of the harmonic yield on the laser polarization direction.

  19. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering the transport of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to contribute to the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  20. Five-dimensional visualization of phase transition in BiNiO3 under high pressure

    International Nuclear Information System (INIS)

    Liu, Yijin; Wang, Junyue; Yang, Wenge; Azuma, Masaki; Mao, Wendy L.

    2014-01-01

    Colossal negative thermal expansion was recently discovered in BiNiO3, associated with a low-density to high-density phase transition under high pressure. The varying proportion of co-existing phases plays a key role in the macroscopic behavior of this material. Here, we utilize a recently developed X-ray Absorption Near Edge Spectroscopy Tomography method and resolve the mixture of high- and low-pressure phases as a function of pressure at tens-of-nanometers resolution, taking advantage of the charge transfer during the transition. This five-dimensional (X, Y, Z, energy, and pressure) visualization of the phase boundary provides a high-resolution method to study the interface dynamics of the high- and low-pressure phases

  1. Quasi-two-dimensional holography

    International Nuclear Information System (INIS)

    Kutzner, J.; Erhard, A.; Wuestenberg, H.; Zimpfer, J.

    1980-01-01

    Acoustical holography with numerical reconstruction by area scanning is memory- and time-intensive. Building on experience with linear holography, we derived a scanning scheme for evaluating two-dimensional flaw sizes. In most practical cases it is sufficient to determine the exact depth extension of a flaw, whereas the accuracy of the length extension is less critical. For this reason the so-called quasi-two-dimensional holography is applicable. The sound field, produced by special probes, is divergent in the inclined plane and slightly focused in the perpendicular plane using cylindrical lenses. (orig.) [de

  2. Dimensional BCS-BEC crossover in ultracold Fermi gases

    Energy Technology Data Exchange (ETDEWEB)

    Boettcher, Igor

    2014-12-10

    We investigate the thermodynamics and phase structure of ultracold Fermi gases, which can be realized and measured in the laboratory with modern trapping techniques. We approach the subject from both a theoretical and an experimental perspective. Central to the analysis is the systematic comparison of the BCS-BEC crossover of two-component fermions in both three and two dimensions. A dimensional reduction can be achieved in experiments by means of highly anisotropic traps. The Functional Renormalization Group (FRG) allows for a description of both cases in a unified theoretical framework. In three dimensions we discuss with the FRG the influence of high-momentum particles on the density, extend previous approaches to the Unitary Fermi Gas to reach quantitative precision, and study the breakdown of superfluidity due to an asymmetry in the population of the two fermion components. In this context we also investigate the stability of the Sarma phase. For the two-dimensional system, scattering theory in reduced dimension plays an important role; we present both its theoretically and experimentally relevant aspects. After a qualitative analysis of the phase diagram and the equation of state in two dimensions with the FRG, we describe the experimental determination of the phase diagram of the two-dimensional BCS-BEC crossover in collaboration with the group of S. Jochim at PI Heidelberg.

  3. Quantum corrections to conductivity observed at intermediate magnetic fields in a high mobility GaAs/AlGaAs 2-dimensional electron gas

    International Nuclear Information System (INIS)

    Taboryski, R.; Veje, E.; Lindelof, P.E.

    1990-01-01

    Magnetoresistance with the field perpendicular to the 2-dimensional electron gas in a high mobility GaAs/AlGaAs heterostructure is studied at low temperatures. At the lowest magnetic fields we observe weak localization. At magnetic fields where the product of the mobility and the magnetic field is of the order of unity, the quantum correction to conductivity due to the electron-electron interaction is identified as a source of magnetoresistance. A consistent analysis of experiments in this regime is performed for the first time. In addition to the well-known electron-electron term with the expected temperature dependence, we find a new type of temperature-independent quantum correction, which varies logarithmically with mobility. (orig.)

  4. The Pulsed High Density Experiment (PHDX) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John P. [Univ. of Washington, Seattle, WA (United States); Andreason, Samuel [Univ. of Washington, Seattle, WA (United States)

    2017-04-27

    The purpose of this paper is to present the conclusions that can be drawn from the Field Reversed Configuration (FRC) formation experiments conducted on the Pulsed High Density experiment (PHD) at the University of Washington. The experiment is ongoing. The experimental goal of this first stage of PHD was to generate a stable, high-flux (>10 mWb), high-energy (>10 kJ) target FRC. Such results would be adequate as a starting point for several later experiments. This work focuses on the experimental implementation and the results of the first four-month run. Difficulties were encountered with the initial on-axis plasma ionization source: flux trapping with this source acting alone was insufficient to accomplish the experimental objectives, and additional ionization methods were utilized to overcome this difficulty. A more ideal plasma-source layout is suggested and will be explored in forthcoming work.

  5. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-01

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical applications. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the biomorphic graphene thus produced is highly crystallized, with atomic layer-thickness controllability, structural designability and fewer non-carbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional, naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from the as-made graphene powders, compatible with either rod-coating, or inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (~110,700 S m-1 at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  6. Thermophysical and Mechanical Properties of Granite and Its Effects on Borehole Stability in High Temperature and Three-Dimensional Stress

    Directory of Open Access Journals (Sweden)

    Wang Yu

    2014-01-01

    Full Text Available When exploiting deep resources, the surrounding rock readily undergoes hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments was conducted to investigate the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500-meter-deep borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we derive the thermo-fluid-solid model and its governing equations. ANSYS-APDL was also used to simulate the influence of temperature on the elastic modulus, Poisson's ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite's stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while Poisson's ratio and the permeability of granite increase. The threshold pressure and temperature are 15 MPa and 200°C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with increasing temperature. The thermo-fluid-solid coupling, which greatly impacts borehole stability, proves to be a good method to analyze similar problems in other formations.

  7. Thermophysical and mechanical properties of granite and its effects on borehole stability in high temperature and three-dimensional stress.

    Science.gov (United States)

    Wang, Yu; Liu, Bao-lin; Zhu, Hai-yan; Yan, Chuan-liang; Li, Zhi-jun; Wang, Zhi-qiao

    2014-01-01

    When exploiting deep resources, the surrounding rock readily undergoes hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments was conducted to investigate the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500 m borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we derive the thermo-fluid-solid model and its governing equation. ANSYS-APDL was also used to simulate the influence of temperature on elastic modulus, Poisson's ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite's stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while the Poisson's ratio and permeability of the granite increase. The threshold pressure and temperature are 15 MPa and 200 °C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with increasing temperature. The thermo-fluid-solid coupling, which greatly impacts borehole stability, proves to be a good method for analyzing similar problems in other formations.

  8. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    Science.gov (United States)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi’s L16 orthogonal array design approach. Three levels were selected for each process parameter, and a clockwise cut direction was followed in all experiments. The measurements were then analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA analysis reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal setting of process-parameter values for that dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
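The ANOM step described above can be sketched as follows; the runs and deviation values below are invented for illustration (the actual study used an L16 array with measured linear dimensions along the X axis):

```python
# Hypothetical illustration of the ANOM step (the study itself used an L16
# orthogonal array; the runs and responses below are invented).
# Each run: (arc-voltage level, standoff level, cutting-speed level, deviation in mm)
runs = [
    (1, 1, 1, 0.42), (1, 2, 2, 0.35), (1, 3, 3, 0.31),
    (2, 1, 2, 0.38), (2, 2, 3, 0.29), (2, 3, 1, 0.40),
    (3, 1, 3, 0.33), (3, 2, 1, 0.44), (3, 3, 2, 0.37),
]

def level_means(runs, factor):
    """ANOM: mean response at each level of one factor."""
    sums, counts = {}, {}
    for run in runs:
        lvl, y = run[factor], run[-1]
        sums[lvl] = sums.get(lvl, 0.0) + y
        counts[lvl] = counts.get(lvl, 0) + 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}

# Optimal setting: for each factor, the level with the smallest mean deviation.
best = {f: min(level_means(runs, f), key=level_means(runs, f).get)
        for f in range(3)}
```

Each factor's optimal level is simply the level whose mean response is best; ANOVA would additionally apportion the total variance among the factors to rank their significance.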

  9. High-order shock-fitted detonation propagation in high explosives

    Science.gov (United States)

    Romick, Christopher M.; Aslam, Tariq D.

    2017-03-01

    A highly accurate numerical shock and material interface fitting scheme composed of fifth-order spatial and third- or fifth-order temporal discretizations is applied to the two-dimensional reactive Euler equations in both slab and axisymmetric geometries. High rates of convergence are not typically possible with shock-capturing methods, as the Taylor series analysis breaks down in the vicinity of discontinuities. Furthermore, for typical high explosive (HE) simulations, the effects of material interfaces at the charge boundary can also cause significant computational errors. Fitting a computational boundary to both the shock front and the material interface (i.e. streamline) alleviates the computational errors associated with captured shocks and thus opens up the possibility of high rates of convergence for multi-dimensional shock and detonation flows. Several verification tests, including a Sedov blast wave, a Zel'dovich-von Neumann-Döring (ZND) detonation wave, and Taylor-Maccoll supersonic flow over a cone, are utilized to demonstrate high rates of convergence to nontrivial shock and reaction flows. Comparisons to previously published shock-capturing multi-dimensional detonations in a polytropic fluid with a constant adiabatic exponent (PF-CAE) are made, demonstrating significantly lower computational error for the present shock and material interface fitting method. For an error on the order of 10 m/s, which is similar to that observed in experiments, shock-fitting offers a computational savings on the order of 1000. In addition, the behavior of the detonation phase speed is examined for several slab widths to evaluate the detonation performance of PBX 9501 while utilizing the Wescott-Stewart-Davis (WSD) model, which is commonly used in HE modeling. It is found that the thickness-effect curve resulting from this equation of state and reaction model using published values is dramatically steeper than observed in recent experiments. Utilizing the present fitting …

  10. Constructing Two-, Zero-, and One-Dimensional Integrated Nanostructures: an Effective Strategy for High Microwave Absorption Performance.

    Science.gov (United States)

    Sun, Yuan; Xu, Jianle; Qiao, Wen; Xu, Xiaobing; Zhang, Weili; Zhang, Kaiyu; Zhang, Xing; Chen, Xing; Zhong, Wei; Du, Youwei

    2016-11-23

    A novel "201" nanostructure composite consisting of two-dimensional MoS2 nanosheets, zero-dimensional Ni nanoparticles and one-dimensional carbon nanotubes (CNTs) was prepared successfully by a two-step method: Ni nanoparticles were deposited onto the surface of few-layer MoS2 nanosheets by a wet chemical method, followed by chemical vapor deposition growth of CNTs through the catalysis of the Ni nanoparticles. The as-prepared 201-MoS2-Ni-CNTs composites exhibit remarkably enhanced microwave absorption performance compared to Ni-MoS2 or Ni-CNTs. The minimum reflection loss (RL) value of 201-MoS2-Ni-CNTs/wax composites with a filler loading ratio of 30 wt% reached -50.08 dB at a thickness of 2.4 mm. The maximum effective microwave absorption bandwidth (RL < -10 dB) of 6.04 GHz was obtained at a thickness of 2.1 mm. The excellent absorption ability originates from an appropriate impedance matching ratio, strong dielectric loss and large surface area, which are attributed to the "201" nanostructure. In addition, this method could be extended to other low-dimensional materials, proving to be an efficient and promising strategy for high microwave absorption performance.
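Reflection-loss figures such as the -50.08 dB quoted above are conventionally computed from the transmission-line model of a single metal-backed absorber layer. A minimal sketch, with invented permittivity and permeability values (the real inputs would be the measured complex electromagnetic parameters of the composite):

```python
import cmath
import math

C = 3e8  # speed of light in vacuum, m/s

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """Normalised input impedance of a metal-backed absorber layer
    (transmission-line model), then RL = 20*log10|(Zin - 1)/(Zin + 1)|."""
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        1j * 2 * math.pi * f_hz * d_m / C * cmath.sqrt(mu_r * eps_r))
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Invented material parameters evaluated at 10 GHz for a 2.4 mm layer:
rl = reflection_loss_db(10 - 5j, 1.0, 10e9, 0.0024)
```

Sweeping `f_hz` and `d_m` over the measurement band would reproduce the RL-versus-frequency-and-thickness maps used to locate the minimum RL and the RL < -10 dB bandwidth.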

  11. Three-dimensional modeling of capsule implosions in OMEGA tetrahedral hohlraums

    International Nuclear Information System (INIS)

    Schnittman, J. D.; Craxton, R. S.

    2000-01-01

    Tetrahedral hohlraums have been proposed as a means for achieving the highly uniform implosions needed for ignition with inertial confinement fusion (ICF) [J. D. Schnittman and R. S. Craxton, Phys. Plasmas 3, 3786 (1996)]. Recent experiments on the OMEGA laser system have achieved good drive uniformity consistent with theoretical predictions [J. M. Wallace et al., Phys. Rev. Lett. 82, 3807 (1999)]. To better understand these experiments and future investigations of high-convergence ICF implosions, the three-dimensional (3-D) view-factor code BUTTERCUP has been expanded to model the time-dependent radiation transport in the hohlraum and the hydrodynamic implosion of the capsule. Additionally, a 3-D postprocessor has been written to simulate x-ray images of the imploded core. Despite BUTTERCUP's relative simplicity, its predictions for radiation drive temperatures, fusion yields, and core deformation show close agreement with experiment. (c) 2000 American Institute of Physics

  12. Physics of low-dimensional systems

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    The physics of low-dimensional systems has developed in a remarkable way over the last decade and has accelerated over the last few years, in particular because of the discovery of the new high-temperature superconductors. The new developments started more than fifteen years ago with the discovery of the unexpected quasi-one-dimensional character of TTF-TCNQ. Since then the field of conducting quasi-one-dimensional organic systems has been growing rapidly. Parallel to the experimental work there has been an important theoretical development of great conceptual importance, involving charge density waves, soliton-like excitations, fractional charges, new symmetry properties, etc. A new field of fundamental importance was opened by the discovery of the Quantum Hall Effect in 1980; this field is still expanding with new experimental and theoretical discoveries. In 1986 came the totally unexpected discovery of high-temperature superconductivity, which started an explosive development. The three areas just mentioned formed the main themes of the Symposium. They do not in any way exhaust the progress in low-dimensional physics: we should mention the recent important developments with two-dimensional, one-dimensional and even zero-dimensional structures (quantum dots). The physics of mesoscopic systems is another important area where low dimensionality is a key feature. Because of the small format of this Symposium we could unfortunately not cover these areas

  13. Thermodynamics of noncommutative high-dimensional AdS black holes with non-Gaussian smeared matter distributions

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Yan-Gang [Nankai University, School of Physics, Tianjin (China); Chinese Academy of Sciences, State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, P.O. Box 2735, Beijing (China); CERN, PH-TH Division, Geneva 23 (Switzerland); Xu, Zhen-Ming [Nankai University, School of Physics, Tianjin (China)

    2016-04-15

    Considering non-Gaussian smeared matter distributions, we investigate the thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and we obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable for the six- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole whose Hawking temperature is within a specific range, but fails for one whose Hawking temperature is beyond this range. (orig.)

  14. Orsay: High-gradient experiment

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    Maintaining the tradition of its contribution to the LEP Injector Linac (LIL), Orsay's Linear Accelerator Laboratory (LAL) is carrying out an R&D programme entitled 'New accelerator physics experiments at LAL' (NEPAL). The aim is to contribute to the long-term development of high energy electron-positron linear colliders, where progress can be of short-term benefit both to conventional accelerators and to injectors in rings or free-electron lasers

  15. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    Full Text Available This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
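The moving least squares approximation at the heart of the method above can be sketched in one dimension; the Gaussian weight function, the linear basis, and the node values below are common but illustrative choices, not taken from the paper:

```python
import math

def mls_1d(x_nodes, y_nodes, x, h=0.5):
    """Moving least squares with linear basis [1, x] and a Gaussian weight:
    solve the 2x2 weighted normal equations centred at the evaluation point."""
    a00 = a01 = a11 = b0 = b1 = 0.0
    for xi, yi in zip(x_nodes, y_nodes):
        w = math.exp(-((x - xi) / h) ** 2)  # weight decays away from x
        a00 += w; a01 += w * xi; a11 += w * xi * xi
        b0 += w * yi; b1 += w * xi * yi
    det = a00 * a11 - a01 * a01
    c0 = (a11 * b0 - a01 * b1) / det  # local intercept
    c1 = (a00 * b1 - a01 * b0) / det  # local slope
    return c0 + c1 * x
```

A defining property of MLS with a linear basis is exact reproduction of linear data, which also makes a convenient correctness check; in the collocation method of the paper, such local fits supply the spatial approximation at each collocation point.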

  16. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    Full Text Available This paper presents a novel and sustainable medium access control (MAC scheme for wireless sensor network (WSN systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference between previous approaches and the proposed one is that existing state-of-the-art schemes use duty cycling and sleep modes to save the energy of individual sensor devices, whereas the proposed scheme employs group management of sensor devices to maximize the energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregate.

  17. Electric Field Guided Assembly of One-Dimensional Nanostructures for High Performance Sensors

    Directory of Open Access Journals (Sweden)

    Wing Kam Liu

    2012-05-01

    Full Text Available Various nanowire or nanotube-based devices have been demonstrated to fulfill the anticipated future demands on sensors. To fabricate such devices, electric field-based methods have demonstrated a great potential to integrate one-dimensional nanostructures into various forms. This review paper discusses theoretical and experimental aspects of the working principles, the assembled structures, and the unique functions associated with electric field-based assembly. The challenges and opportunities of the assembly methods are addressed in conjunction with future directions toward high performance sensors.

  18. Two-dimensional nuclear magnetic resonance spectroscopy

    International Nuclear Information System (INIS)

    Bax, A.; Lerner, L.

    1986-01-01

    Great spectral simplification can be obtained by spreading the conventional one-dimensional nuclear magnetic resonance (NMR) spectrum in two independent frequency dimensions. This so-called two-dimensional NMR spectroscopy removes spectral overlap, facilitates spectral assignment, and provides a wealth of additional information. For example, conformational information related to interproton distances is available from resonance intensities in certain types of two-dimensional experiments. Another method generates 1H NMR spectra of a preselected fragment of the molecule, suppressing resonances from other regions and greatly simplifying spectral appearance. Two-dimensional NMR spectroscopy can also be applied to the study of 13C and 15N, not only providing valuable connectivity information but also improving the sensitivity of 13C and 15N detection by up to two orders of magnitude. 45 references, 10 figures

  19. Interpretation of data in the classical and three-dimensional β-autoradiography

    International Nuclear Information System (INIS)

    Rusov, V.D.; Semenov, M.Yu.; Babikova, Yu.F.

    1983-01-01

    The main result of this work, which completes a stage of studies on the unambiguous interpretation of autoradiograms, is an experimental test of the theoretical model of electron-microscopic β-autoradiography. Native DNA molecules are used as linear sources. On the basis of the experiments, a method is developed that permits obtaining high-quality autoradiograms of linear β-sources combined with their image. The validity of the theoretical model of autoradiography, i.e. the adequacy of the restored ''actual'' location of the β-sources to the true geometry of their distribution in the autoradiographic image, is proved on the basis of this method. It is concluded that not only the classical (two-dimensional) but also a three-dimensional variant of electron-microscopic radiography is practically realizable

  20. Preparation of wholemount mouse intestine for high-resolution three-dimensional imaging using two-photon microscopy.

    Science.gov (United States)

    Appleton, P L; Quyn, A J; Swift, S; Näthke, I

    2009-05-01

    Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. The regions of highest interest lie between 50 and 200 μm within this tissue. The quality and usefulness of three-dimensional image data from such depths are limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at short working distances, making it difficult to image at high resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality, such that the limiting factor for a standard, inverted multi-photon microscope is the working distance of the objective rather than detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.

  1. High performance top-gated ferroelectric field effect transistors based on two-dimensional ZnO nanosheets

    Science.gov (United States)

    Tian, Hongzheng; Wang, Xudong; Zhu, Yuankun; Liao, Lei; Wang, Xianying; Wang, Jianlu; Hu, Weida

    2017-01-01

    High quality ultrathin two-dimensional zinc oxide (ZnO) nanosheets (NSs) are synthesized, and ZnO NS ferroelectric field effect transistors (FeFETs) are demonstrated using a P(VDF-TrFE) polymer film as the top gate insulating layer. The ZnO NSs exhibit a maximum field effect mobility of 588.9 cm2/(V s) and a large transconductance of 2.5 μS due to their high crystalline quality and ultrathin two-dimensional structure. The polarization property of the P(VDF-TrFE) film is studied, and a remnant polarization of >100 μC/cm2 is achieved with a P(VDF-TrFE) thickness of 300 nm. Because of the ultrahigh remnant polarization field generated in the P(VDF-TrFE) film, the FeFETs show a large memory window of 16.9 V and a high source-drain on/off current ratio of more than 10^7 at zero gate voltage and a source-drain bias of 0.1 V. Furthermore, a retention time of >3000 s of the polarization state is obtained, making these devices a promising candidate for applications in non-volatile data storage.

  2. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationship in the biomarkers levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  3. Reliable Exfoliation of Large-Area High-Quality Flakes of Graphene and Other Two-Dimensional Materials.

    Science.gov (United States)

    Huang, Yuan; Sutter, Eli; Shi, Norman N; Zheng, Jiabao; Yang, Tianzhong; Englund, Dirk; Gao, Hong-Jun; Sutter, Peter

    2015-11-24

    Mechanical exfoliation has been a key enabler of the exploration of the properties of two-dimensional materials, such as graphene, by providing routine access to high-quality material. The original exfoliation method, which remained largely unchanged during the past decade, provides relatively small flakes with moderate yield. Here, we report a modified approach for exfoliating thin monolayer and few-layer flakes from layered crystals. Our method introduces two process steps that enhance and homogenize the adhesion force between the outermost sheet in contact with a substrate: Prior to exfoliation, ambient adsorbates are effectively removed from the substrate by oxygen plasma cleaning, and an additional heat treatment maximizes the uniform contact area at the interface between the source crystal and the substrate. For graphene exfoliation, these simple process steps increased the yield and the area of the transferred flakes by more than 50 times compared to the established exfoliation methods. Raman and AFM characterization shows that the graphene flakes are of similar high quality as those obtained in previous reports. Graphene field-effect devices were fabricated and measured with back-gating and solution top-gating, yielding mobilities of ∼4000 and 12,000 cm(2)/(V s), respectively, and thus demonstrating excellent electrical properties. Experiments with other layered crystals, e.g., a bismuth strontium calcium copper oxide (BSCCO) superconductor, show enhancements in exfoliation yield and flake area similar to those for graphene, suggesting that our modified exfoliation method provides an effective way for producing large area, high-quality flakes of a wide range of 2D materials.

  4. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high-dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolutions of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than that in a sub-Ohmic or Ohmic thermal reservoir environment.
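Negativity, one of the two measures used above, is computed directly from the partial transpose of the density matrix: N(ρ) = (||ρ^{T_B}||₁ − 1)/2. A minimal sketch for the simplest bipartite (two-qubit) case is given below; for a spin-S system the same function applies with `dims=(2S+1, 2S+1)`:

```python
import numpy as np

def negativity(rho, dims=(2, 2)):
    """N(rho) = (||rho^{T_B}||_1 - 1)/2, i.e. the absolute sum of the
    negative eigenvalues of the partial transpose over subsystem B."""
    da, db = dims
    # Reshape to indices (a, b, a', b') and transpose b <-> b'.
    rho_tb = (rho.reshape(da, db, da, db)
                 .transpose(0, 3, 2, 1)
                 .reshape(da * db, da * db))
    eigs = np.linalg.eigvalsh(rho_tb)
    return float(-eigs[eigs < 0].sum())

# Two-qubit Bell state (|00> + |11>)/sqrt(2): maximally entangled, negativity 1/2.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_bell = np.outer(psi, psi)
```

Tracking `negativity` of the system's reduced state over time, under whatever dephasing map is being modeled, yields entanglement-decay curves of the kind the abstract analyzes.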

  5. High-Dimensional Modeling for Cytometry: Building Rock Solid Models Using GemStone™ and Verity Cen-se'™ High-Definition t-SNE Mapping.

    Science.gov (United States)

    Bruce Bagwell, C

    2018-01-01

    This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.

  6. Two-dimensional boundary-value problem for ion-ion diffusion

    International Nuclear Information System (INIS)

    Tuszewski, M.; Lichtenberg, A.J.

    1977-01-01

    Like-particle diffusion is usually negligible compared with unlike-particle diffusion because it is two orders higher in spatial derivatives. When the ratio of the ion gyroradius to the plasma transverse dimension is of the order of the fourth root of the mass ratio, previous one-dimensional analysis indicated that like-particle diffusion is significant. A two-dimensional boundary-value problem for ion-ion diffusion is investigated. Numerical solutions are found with models for which the nonlinear partial differential equation reduces to an ordinary fourth-order differential equation. These solutions indicate that the ion-ion losses are higher by a factor of six for a slab geometry, and by a factor of four for circular geometry, than estimated from dimensional analysis. The solutions are applied to a multiple-mirror experiment stabilized with a quadrupole magnetic field which generates highly elliptical flux surfaces. It is found that the ion-ion losses dominate the electron-ion losses and that these classical radial losses contribute to a significant decrease of plasma lifetime, in qualitative agreement with the experimental results

  7. Three dimensional imaging technique for laser-plasma diagnostics

    International Nuclear Information System (INIS)

    Jiang Shaoen; Zheng Zhijian; Liu Zhongli

    2001-01-01

    A CT technique for laser-plasma diagnostics and a three-dimensional (3D) image reconstruction program (CT3D) have been developed. The 3D images of the laser plasma are reconstructed using a multiplicative algebraic reconstruction technique (MART) from five pinhole camera images obtained along different viewing directions. The technique has been used to measure the three-dimensional X-ray distribution in laser-plasma experiments on the Xingguang II device, and good results were obtained. This shows that the CT technique can be applied to ICF experiments

  8. Three dimensional imaging technique for laser-plasma diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Shaoen, Jiang; Zhijian, Zheng; Zhongli, Liu [China Academy of Engineering Physics, Chengdu (China)

    2001-04-01

    A CT technique for laser-plasma diagnostics and a three-dimensional (3D) image reconstruction program (CT3D) have been developed. The 3D images of the laser plasma are reconstructed using a multiplicative algebraic reconstruction technique (MART) from five pinhole camera images obtained along different viewing directions. The technique has been used to measure the three-dimensional X-ray distribution in laser-plasma experiments on the Xingguang II device, and good results were obtained. This shows that the CT technique can be applied to ICF experiments.
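The MART algorithm named in both records above updates each pixel multiplicatively so that, after each ray update, that ray's measured projection is matched exactly. A minimal sketch on an invented 2 × 2 image with row- and column-sum "projections" (the actual reconstructions used five pinhole views of a 3D emission volume):

```python
def mart(rays, p, n, iters=20, lam=1.0):
    """Multiplicative ART: scale every pixel in a ray by (measured/predicted)
    raised to the pixel's weight, cycling over all rays each iteration."""
    x = [1.0] * n  # strictly positive initial guess (required by MART)
    for _ in range(iters):
        for a, meas in zip(rays, p):
            pred = sum(aj * xj for aj, xj in zip(a, x))
            for j in range(n):
                if a[j]:
                    x[j] *= (meas / pred) ** (lam * a[j])
    return x

# Toy 2x2 "image" [x0 x1; x2 x3], measured by consistent row and column sums:
rays = [[1, 1, 0, 0], [0, 0, 1, 1],   # row sums
        [1, 0, 1, 0], [0, 1, 0, 1]]   # column sums
p = [3.0, 7.0, 4.0, 6.0]
x = mart(rays, p, 4)
```

With consistent data, the iterates converge to a nonnegative solution satisfying every projection; the per-ray multiplicative update is what distinguishes MART from the additive ART family.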

  9. Space experiments with high stability clocks

    International Nuclear Information System (INIS)

    Vessot, R.F.C.

    1993-01-01

    Modern metrology depends increasingly on the accuracy and frequency stability of atomic clocks. Applications of such high-stability oscillators (or clocks) to experiments performed in space are described, and estimates of the precision of these experiments are made in terms of clock performance. Methods using time correlation to cancel localized disturbances in very long signal paths and a proposed spaceborne four-station VLBI system are described. (TEC). 30 refs., 14 figs., 1 tab

  10. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
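The partial-separability property exploited above writes the spatiospectral signal as s(x,t) = Σ_l u_l(x)·v_l(t) with small L, so that once the temporal basis v_l has been estimated (in SPICE, from fully sampled training data), each voxel's spatial coefficients follow from a tiny least-squares fit even with sparse (k,t)-space sampling. A minimal sketch with an invented two-element basis and noiseless samples:

```python
import math

# Partial separability: s(x, t) ~ sum_l u_l(x) * v_l(t) with small L.
# L = 2 here; the basis functions are invented stand-ins for the subspace
# that would be estimated from training data.
T = [0.1 * k for k in range(8)]
v1 = [math.exp(-t) for t in T]      # hypothetical decay-like basis function
v2 = [math.cos(5 * t) for t in T]   # hypothetical oscillatory basis function

def fit_voxel(samples):
    """Solve the 2x2 normal equations for one voxel's spatial coefficients
    (u1, u2) from a few (time-index, sample) pairs."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for k, s in samples:
        a11 += v1[k] * v1[k]; a12 += v1[k] * v2[k]; a22 += v2[k] * v2[k]
        b1 += v1[k] * s; b2 += v2[k] * s
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# A voxel with true coefficients (2.0, -0.5), observed at only 4 of 8 times:
true_u = (2.0, -0.5)
samples = [(k, true_u[0] * v1[k] + true_u[1] * v2[k]) for k in (0, 2, 5, 7)]
u_hat = fit_voxel(samples)
```

Because the signal truly lies in the low-dimensional subspace, four samples suffice to recover both coefficients exactly; this is the sense in which subspace modeling buys the acceleration factor reported in the abstract.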

  11. Collective excitations and superconductivity in reduced dimensional systems - Possible mechanism for high Tc

    International Nuclear Information System (INIS)

    Santoyo, B.M.

    1989-01-01

    The author studies in full detail a possible mechanism of superconductivity in slender electronic systems of finite cross section. This mechanism is based on the pairing interaction mediated by the multiple modes of acoustic plasmons in these structures. First, he shows that multiple non-Landau-damped acoustic plasmon modes exist for electrons in a quasi-one-dimensional wire at finite temperatures. These plasmons are of two basic types. The first is made up of the collective longitudinal oscillations of the electrons essentially of a given transverse energy level oscillating against the electrons in the neighboring transverse energy level; these modes are called Slender Acoustic Plasmons, or SAPs. The other is the quasi-one-dimensional acoustic plasmon mode, in which all the electrons oscillate together in phase among themselves but out of phase against the positive ion background. He shows numerically and argues physically that even for a temperature comparable to the mode separation Δω the SAPs and the quasi-one-dimensional plasmon persist. Then, based on a clear physical picture, he develops in terms of the dielectric function a theory of superconductivity capable of treating the simultaneous participation of multiple bosonic modes that mediate the pairing interaction. The effect of mode damping is then incorporated in a simple manner that is free of the encumbrance of the strong-coupling Green's function formalism usually required for the retardation effect. Explicit formulae including such damping are derived for the critical temperature Tc and the energy gap Δ0. With those modes, and armed with such a formalism, he proceeds to investigate a possible superconducting mechanism for high Tc in quasi-one-dimensional single-wire and multi-wire systems

  12. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

The necessity of developing real-time computerized tomography (CT), aimed at dynamic observation of organs such as the heart, has lately been advocated. Its realization requires reconstructing images markedly faster than present CTs do. Although various reconstruction methods have been proposed, the only method in practical use at present is filtered backprojection (FBP), which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method, although promising for high-speed reconstruction because of its lower computing time, was regarded as unsuitable for practical use because of poor image quality. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high-quality images by exploiting the relationship between image quality and interpolation method. In this algorithm, the number of radial data sampling points in Fourier space is increased by a factor of 2^β, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that for practical image matrices the image quality is almost the same, the computation time of the TFT method is about 1/10 that of FBP, and the memory requirement is reduced by about 20%. (Wakatsuki, Y.)
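The identity underlying the direct Fourier approach, the projection-slice theorem, can be sketched numerically (a generic illustration, not the authors' algorithm; the phantom and array sizes are arbitrary):

```python
# Projection-slice theorem behind direct 2D Fourier (TFT) reconstruction:
# the 1D FFT of a parallel-beam projection equals a central slice of the
# object's 2D FFT. Interpolating many such polar slices onto a Cartesian
# grid, then inverse-transforming, is the essence of the TFT method.
import numpy as np

rng = np.random.default_rng(0)
phantom = rng.random((64, 64))        # stand-in for a 2D object slice

# A single projection at 0 degrees (sum along one axis)
projection = phantom.sum(axis=0)

# FFT of the 1D projection ...
slice_1d = np.fft.fft(projection)

# ... equals the k_y = 0 row of the 2D FFT of the object.
central_row = np.fft.fft2(phantom)[0, :]

assert np.allclose(slice_1d, central_row)
```

The accuracy of the polar-to-Cartesian interpolation step, not this identity, is what limits image quality, which is why the abstract emphasizes oversampling and spline interpolation.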

  13. Recent DIII-D high power heating and current drive experiments

    International Nuclear Information System (INIS)

    Simonen, T.C.; Jackson, G.L.; Mahdavi, M.A.; Petrie, T.W.; Politzer, P.A.; Taylor, T.S.; Lazarus, E.A.

    1994-02-01

This paper describes recent DIII-D high power heating and current drive experiments. Described are experiments with improved wall conditioning, divertor particle pumping, radiative divertor experiments, and studies of plasma shape and high poloidal beta

  14. Dynamic colloidal assembly pathways via low dimensional models

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yuguang; Bevan, Michael A., E-mail: mabevan@jhu.edu [Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, Maryland 21218 (United States); Thyagarajan, Raghuram; Ford, David M. [Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-05-28

Here we construct a low-dimensional Smoluchowski model for electric-field-mediated colloidal crystallization using Brownian dynamics simulations, which were previously matched to experiments. Diffusion mapping is used to infer dimensionality and confirm the use of two order parameters, one for degree of condensation and one for global crystallinity. Free energy and diffusivity landscapes are obtained as the coefficients of a low-dimensional Smoluchowski equation to capture the thermodynamics and kinetics of microstructure evolution. The resulting low-dimensional model quantitatively captures the dynamics of different assembly pathways between fluid, polycrystal, and single-crystal states, in agreement with the full N-dimensional data as characterized by first-passage time distributions. Numerical solution of the low-dimensional Smoluchowski equation reveals statistical properties of the dynamic evolution of states vs. applied field amplitude and system size. The low-dimensional Smoluchowski equation and associated landscapes calculated here can serve as models for predictive control of electric-field-mediated assembly of colloidal ensembles into two-dimensional crystalline objects.
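The kind of observable used to validate such a model, a first-passage time distribution, can be sketched for a one-dimensional Smoluchowski (overdamped Langevin) process; the double-well landscape F(x) = (x² − 1)² and all parameters below are illustrative assumptions, not the paper's landscapes:

```python
# Overdamped Langevin dynamics on an assumed double-well free energy
# F(x) = (x^2 - 1)^2 (in units of kT); first-passage times from the
# left well (x = -1) to the right well (x = +1) are collected.
import numpy as np

def first_passage_time(rng, dt=1e-3, x0=-1.0, target=1.0, max_steps=2_000_000):
    """Time until the walker first reaches the target basin (None if never)."""
    x = x0
    for step in range(max_steps):
        force = -4.0 * x * (x * x - 1.0)            # -dF/dx
        x += force * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        if x >= target:
            return step * dt
    return None

rng = np.random.default_rng(1)
times = [first_passage_time(rng) for _ in range(20)]
assert all(t is not None for t in times)            # barrier is only ~1 kT
mean_fpt = sum(times) / len(times)
```

A histogram of `times` is the 1D analogue of the first-passage time distributions used above to compare the low-dimensional model against the full N-dimensional data.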

  15. TSOM Method for Nanoelectronics Dimensional Metrology

    International Nuclear Information System (INIS)

    Attota, Ravikiran

    2011-01-01

Through-focus scanning optical microscopy (TSOM) is a relatively new method that transforms conventional optical microscopes into truly three-dimensional metrology tools for nanoscale-to-microscale dimensional analysis. TSOM achieves this by acquiring and analyzing a set of optical images collected at various focus positions going through focus (from above-focus to under-focus). The measurement resolution is comparable to what is possible with typical light scatterometry, scanning electron microscopy (SEM) and atomic force microscopy (AFM). The TSOM method is able to identify the presence, type, and magnitude of nanometer-scale differences between two nano/microscale targets using a conventional optical microscope with visible-wavelength illumination. Numerous industries could benefit from the TSOM method, such as the semiconductor industry, MEMS, NEMS, biotechnology, nanomanufacturing, data storage, and photonics. The method is relatively simple and inexpensive, has a high throughput, provides nanoscale sensitivity for 3D measurements, and could enable significant savings and yield improvements in nanometrology and nanomanufacturing. Potential applications are demonstrated using experiments and simulations.
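The data construction behind TSOM, stacking through-focus images and differencing the stacks of two targets, can be sketched with an idealized optical model (the defocus-dependent Gaussian blur, target widths, and scan ranges below are assumptions for illustration, not real instrument parameters):

```python
# Idealized TSOM sketch: 1D intensity profiles of a line target are
# collected at many focus positions and stacked into a 2D "TSOM image"
# (rows: focus position, cols: lateral position). Two targets differing
# by a tiny width change are compared via their differential TSOM image.
import numpy as np

def through_focus_stack(width, x, focus_positions):
    """Stack of 1D profiles; blur variance grows away from best focus."""
    stack = []
    for z in focus_positions:
        sigma = 1.0 + 0.5 * abs(z)          # PSF width vs. defocus (assumed)
        # Gaussian object of the given width convolved with a Gaussian PSF:
        # variances add, giving this analytic proxy for the blurred image.
        profile = np.exp(-x**2 / (2.0 * (sigma**2 + width**2)))
        stack.append(profile)
    return np.array(stack)

x = np.linspace(-5, 5, 201)                  # lateral positions
z = np.linspace(-3, 3, 25)                   # focus positions

tsom_a = through_focus_stack(0.50, x, z)     # reference target
tsom_b = through_focus_stack(0.52, x, z)     # slightly wider target

diff = np.abs(tsom_a - tsom_b)               # differential TSOM image
signal = diff.max()                          # nonzero: size change is visible
assert signal > 0
```

In practice the spatial pattern of `diff`, not just its magnitude, is what distinguishes the *type* of difference (width vs. height vs. sidewall) between targets.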

  16. Recent DIII-D high power heating and current drive experiments

    International Nuclear Information System (INIS)

    Simonen, T.C.; Jackson, G.L.; Lazarus, E.A.; Mahdavi, M.A.; Petrie, T.W.; Politzer, P.A.; Taylor, T.S.

    1995-01-01

    This paper describes recent DIII-D high power heating and current drive experiments. Described are experiments with improved wall conditioning, divertor particle pumping, radiative divertor experiments, studies of plasma shape and high poloidal β. ((orig.))

  17. Recent DIII-D high power heating and current drive experiments

    Energy Technology Data Exchange (ETDEWEB)

    Simonen, T.C. [General Atomics, San Diego, CA (United States); Jackson, G.L. [General Atomics, San Diego, CA (United States); Lazarus, E.A. [Oak Ridge National Lab., TN (United States); Mahdavi, M.A. [General Atomics, San Diego, CA (United States); Petrie, T.W. [General Atomics, San Diego, CA (United States); Politzer, P.A. [General Atomics, San Diego, CA (United States); Taylor, T.S. [General Atomics, San Diego, CA (United States); DIII-D Team

    1995-01-01

This paper describes recent DIII-D high power heating and current drive experiments. Described are experiments with improved wall conditioning, divertor particle pumping, radiative divertor experiments, studies of plasma shape and high poloidal β. ((orig.)).

  18. High beta experiments in CHS

    International Nuclear Information System (INIS)

    Okamura, S.; Matsuoka, K.; Nishimura, K.

    1994-09-01

High beta experiments were performed in the low-aspect-ratio helical device CHS, with volume-averaged equilibrium beta up to 2.1% (the highest value for helical systems). These values are obtained for high-density plasmas in a low magnetic field heated with two tangential neutral beams. Confinement improvement obtained by turning off gas puffing contributed significantly to reaching high betas. Magnetic fluctuations increased with increasing beta, but finally stopped increasing in the beta range above 1%. The coherent modes appearing in the magnetic hill region showed a strong dependence on the beta values. Dynamic poloidal field control was applied to suppress the outward plasma movement with increasing plasma pressure. Such operation gave fixed-boundary operation of high beta plasmas in helical systems. (author)

  19. Discovering highly informative feature set over high dimensions

    KAUST Repository

    Zhang, Chongsheng; Masseglia, Florent; Zhang, Xiangliang

    2012-01-01

For many textual collections, the number of features is often overly large. These features can be very redundant; it is therefore desirable to have a small, succinct, yet highly informative collection of features that describes the key characteristics of a dataset. Information theory is one tool for obtaining this feature collection. With this paper, we mainly contribute to improving the efficiency of the process of selecting the most informative feature set over high-dimensional unlabeled data. We propose a heuristic theory for informative feature set selection from high-dimensional data. Moreover, we design data structures that enable us to compute the entropies of the candidate feature sets efficiently. We also develop a simple pruning strategy that eliminates hopeless candidates at each forward selection step. We test our method through experiments on real-world data sets, showing that our proposal is very efficient. © 2012 IEEE.
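The core loop of entropy-based forward selection can be sketched in a few lines (a generic illustration in the spirit of the approach described above; the toy dataset, the plug-in entropy estimator, and the fixed selection size are assumptions, not the paper's data structures or pruning rules):

```python
# Greedy forward selection of an informative feature set by joint entropy.
from collections import Counter
from math import log2

def joint_entropy(rows, features):
    """Empirical joint entropy (bits) of the selected feature columns."""
    counts = Counter(tuple(r[f] for f in features) for r in rows)
    n = len(rows)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def forward_select(rows, n_features, k):
    """At each step, add the feature that most increases joint entropy."""
    selected = []
    for _ in range(k):
        remaining = [f for f in range(n_features) if f not in selected]
        best = max(remaining, key=lambda f: joint_entropy(rows, selected + [f]))
        selected.append(best)
    return selected

# Toy binary data: feature 2 varies (1 bit); features 0 and 1 are constant.
rows = [(0, 1, 0), (0, 1, 1), (0, 1, 0), (0, 1, 1)]
print(forward_select(rows, n_features=3, k=1))   # -> [2]
```

The paper's contribution is making the `joint_entropy` evaluations cheap and discarding hopeless candidates before they are scored; this sketch shows only the selection skeleton those optimizations accelerate.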

  20. Discovering highly informative feature set over high dimensions

    KAUST Repository

    Zhang, Chongsheng

    2012-11-01

For many textual collections, the number of features is often overly large. These features can be very redundant; it is therefore desirable to have a small, succinct, yet highly informative collection of features that describes the key characteristics of a dataset. Information theory is one tool for obtaining this feature collection. With this paper, we mainly contribute to improving the efficiency of the process of selecting the most informative feature set over high-dimensional unlabeled data. We propose a heuristic theory for informative feature set selection from high-dimensional data. Moreover, we design data structures that enable us to compute the entropies of the candidate feature sets efficiently. We also develop a simple pruning strategy that eliminates hopeless candidates at each forward selection step. We test our method through experiments on real-world data sets, showing that our proposal is very efficient. © 2012 IEEE.

  1. Three-dimensional bicontinuous nanoporous Au/polyaniline hybrid films for high-performance electrochemical supercapacitors

    Science.gov (United States)

    Lang, Xingyou; Zhang, Ling; Fujita, Takeshi; Ding, Yi; Chen, Mingwei

    2012-01-01

We report three-dimensional bicontinuous nanoporous Au/polyaniline (PANI) composite films made by one-step electrochemical polymerization of a PANI shell onto dealloyed nanoporous gold (NPG) skeletons, for applications in electrochemical supercapacitors. The NPG/PANI-based supercapacitors exhibit ultrahigh volumetric capacitance (∼1500 F cm-3) and energy density (∼0.078 Wh cm-3), which are, respectively, seven and four orders of magnitude higher than those of electrolytic capacitors, with power density up to ∼190 W cm-3. The outstanding capacitive performance results from a novel nanoarchitecture in which pseudocapacitive PANI shells are incorporated into the pore channels of highly conductive NPG, making these films promising candidates as electrode materials in supercapacitor devices combining high energy storage density with high power delivery.
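The reported energy density is consistent in order of magnitude with the usual capacitor relation E = CV²/2; the voltage window below is an assumed value for illustration, not a figure from the paper:

```python
# Order-of-magnitude check: volumetric energy density from volumetric
# capacitance via E = C * V^2 / 2, converted from J to Wh.
C = 1500.0        # F cm^-3, the reported volumetric capacitance
V = 0.8           # V, an ASSUMED aqueous operating window (illustrative)
E_J = 0.5 * C * V**2          # J cm^-3
E_Wh = E_J / 3600.0           # Wh cm^-3
assert 0.01 < E_Wh < 0.2      # same order as the reported ~0.078 Wh cm^-3
```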

  2. Compilation of current high-energy-physics experiments

    International Nuclear Information System (INIS)

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.

    1980-04-01

    This is the third edition of a compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking of data by 1 January 1976

  3. Three-dimensional porous hollow fibre copper electrodes for efficient and high-rate electrochemical carbon dioxide reduction

    NARCIS (Netherlands)

    Kas, Recep; Hummadi, Khalid Khazzal; Kortlever, Ruud; de Wit, Patrick; Milbrat, Alexander; Luiten-Olieman, Maria W.J.; Benes, Nieck Edwin; Koper, Marc T.M.; Mul, Guido

    2016-01-01

    Aqueous-phase electrochemical reduction of carbon dioxide requires an active, earth-abundant electrocatalyst, as well as highly efficient mass transport. Here we report the design of a porous hollow fibre copper electrode with a compact three-dimensional geometry, which provides a large area,

  4. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.

  5. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-02-02

The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.

  6. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  7. Design and modeling of precision solid liner experiments on Pegasus

    International Nuclear Information System (INIS)

    Bowers, R.L.; Brownell, J.H.; Lee, H.; McLenithan, K.D.; Scannapieco, A.J.; Shanahan, W.R.

    1998-01-01

Pulsed power driven solid liners may be used for a variety of physics experiments involving materials at high stresses. These include shock formation and propagation, material strain-rate effects, material melt, instability growth, and ejecta from shocked surfaces. We describe the design and performance of a cylindrical solid liner that can attain velocities in the several mm/μs regime, and that can be used to drive high-stress experiments. An approximate theoretical analysis of solid liner implosions is used to establish the basic parameters (mass, materials, and initial radius) of the driver. We then present one-dimensional and two-dimensional simulations of magnetically driven liner implosions which include resistive heating and elastic-plastic behavior. The two-dimensional models are used to study the effects of electrode glide planes on the liner's performance, to examine sources of perturbations of the liner, and to assess possible effects of instability growth during the implosion. Finally, simulations are compared with experimental data to show that the solid liner performed as predicted computationally. Experimental data indicate that the liner imploded from an initial radius of 2.4 cm to a target radius of 1.5 cm, and that it was concentric and cylindrical to better than the experimental resolution (60 μm) at the target. The results demonstrate that a precision solid liner can be produced for high-stress, pulsed power applications experiments. copyright 1998 American Institute of Physics

  8. Solar array experiments on the SPHINX satellite. [Space Plasma High voltage INteraction eXperiment satellite

    Science.gov (United States)

    Stevens, N. J.

    1974-01-01

    The Space Plasma, High Voltage Interaction Experiment (SPHINX) is the name given to an auxiliary payload satellite scheduled to be launched in January 1974. The principal experiments carried on this satellite are specifically designed to obtain the engineering data on the interaction of high voltage systems with the space plasma. The classes of experiments are solar array segments, insulators, insulators with pin holes and conductors. The satellite is also carrying experiments to obtain flight data on three new solar array configurations: the edge illuminated-multijunction cells, the teflon encased cells, and the violet cells.

  9. Enhanced 29Si spin-lattice relaxation and observation of three-dimensional lattice connectivity in zeolites by two-dimensional 29Si MASS NMR

    International Nuclear Information System (INIS)

    Sivadinarayana, C.; Choudhary, V.R.; Ganapathy, S.

    1994-01-01

It is shown that considerable sensitivity enhancement is achieved in the 29Si magic angle sample spinning (MASS) NMR spectra of highly siliceous zeolites by pretreating the material with oxygen. The presence of adsorbed molecular oxygen in the zeolite channels promotes efficient 29Si spin-lattice relaxation via a paramagnetic interaction between the lattice 29Si T-sites and the adsorbed oxygen. This affords efficient 2D data collection and leads to increased sensitivity. The utility of this method is demonstrated in a two-dimensional COSY-45 NMR experiment on the high-silica zeolite ZSM-5. (author). 20 refs., 3 figs., 1 tab

  10. High gain requirements and high field Tokamak experiments

    International Nuclear Information System (INIS)

    Cohn, D.R.

    1994-01-01

Operation at sufficiently high gain (ratio of fusion power to external heating power) is a fundamental requirement for tokamak power reactors. For typical reactor concepts, the gain is greater than 25. Self-heating from alpha particles in deuterium-tritium plasmas can greatly reduce the nτ and temperature requirements for high gain. A range of high-gain operating conditions is possible with different values of alpha-particle heating efficiency (the fraction of alpha-particle power that actually heats the plasma) and with different ratios of self-heating to external heating. At one extreme there is ignited operation, where all of the required plasma heating is provided by alpha particles and the alpha-particle efficiency is 100%. At the other extreme there is the case of no heating contribution from alpha particles. The nτ and temperature requirements for high gain are determined as a function of alpha-particle heating efficiency. Possibilities for high-gain experiments in deuterium-tritium, deuterium, and hydrogen plasmas are discussed
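The link between gain and alpha self-heating can be made concrete with a simple power-balance sketch. In D-T fusion the alpha particles carry about 1/5 of the fusion power, so with gain Q = P_fus/P_ext and alpha heating efficiency η, the alpha-heated fraction of total plasma heating is f = η(Q/5)/(η(Q/5) + 1). The numbers below are illustrative, not from the paper:

```python
# Fraction of plasma heating supplied by alpha particles, from the
# D-T power balance: alphas carry ~1/5 of the fusion power.
def alpha_heating_fraction(Q, eta):
    """Q = fusion gain, eta = fraction of alpha power absorbed by plasma."""
    p_alpha = eta * Q / 5.0      # absorbed alpha power per unit P_ext
    return p_alpha / (p_alpha + 1.0)

# No alpha heating at Q = 0; alphas dominate as Q grows (ignition limit).
assert alpha_heating_fraction(0.0, 1.0) == 0.0
assert alpha_heating_fraction(1000.0, 1.0) > 0.99
# At a reactor-like gain Q = 25 with perfect alpha efficiency,
# alphas supply 5/6 of the heating.
assert abs(alpha_heating_fraction(25.0, 1.0) - 5.0 / 6.0) < 1e-12
```

Lower alpha efficiency η pushes f down at fixed Q, which is why the nτ and temperature requirements in the abstract depend on η.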

  11. High-resolution computer-generated reflection holograms with three-dimensional effects written directly on a silicon surface by a femtosecond laser.

    Science.gov (United States)

    Wædegaard, Kristian J; Balling, Peter

    2011-02-14

    An infrared femtosecond laser has been used to write computer-generated holograms directly on a silicon surface. The high resolution offered by short-pulse laser ablation is employed to write highly detailed holograms with resolution up to 111 kpixels/mm2. It is demonstrated how three-dimensional effects can be realized in computer-generated holograms. Three-dimensional effects are visualized as a relative motion between different parts of the holographic reconstruction, when the hologram is moved relative to the reconstructing laser beam. Potential security applications are briefly discussed.

  12. Nano-engineering of three-dimensional core/shell nanotube arrays for high performance supercapacitors

    Science.gov (United States)

    Grote, Fabian; Wen, Liaoyong; Lei, Yong

    2014-06-01

    Large-scale arrays of core/shell nanostructures are highly desirable to enhance the performance of supercapacitors. Here we demonstrate an innovative template-based fabrication technique with high structural controllability, which is capable of synthesizing well-ordered three-dimensional arrays of SnO2/MnO2 core/shell nanotubes for electrochemical energy storage in supercapacitor applications. The SnO2 core is fabricated by atomic layer deposition and provides a highly electrical conductive matrix. Subsequently a thin MnO2 shell is coated by electrochemical deposition onto the SnO2 core, which guarantees a short ion diffusion length within the shell. The core/shell structure shows an excellent electrochemical performance with a high specific capacitance of 910 F g-1 at 1 A g-1 and a good rate capability of remaining 217 F g-1 at 50 A g-1. These results shall pave the way to realize aqueous based asymmetric supercapacitors with high specific power and high specific energy.

  13. Modeling a ponded infiltration experiment at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hudson, D.B.; Guertal, W.R.; Flint, A.L.

    1994-01-01

Yucca Mountain, Nevada is being evaluated as a potential site for a geologic repository for high-level radioactive waste. As part of the site characterization activities at Yucca Mountain, a field-scale ponded infiltration experiment was done to help characterize the hydraulic and infiltration properties of a layered desert alluvium deposit. Calcium carbonate accumulation and cementation, heterogeneous layered profiles, high evapotranspiration, low precipitation, and rocky soil make the surface difficult to characterize. The effects of the strong morphological horizonation on the infiltration processes, the suitability of measured hydraulic properties, and the usefulness of ponded infiltration experiments in site characterization work were of interest. One-dimensional and two-dimensional radial flow numerical models were used to help interpret the results of the ponding experiment. The objective of this study was to evaluate the results of a ponded infiltration experiment done around borehole UE25 UZN #85 (N85) at Yucca Mountain, NV. The effects of morphological horizons on the infiltration processes, lateral flow, and measured soil hydraulic properties were studied. The evaluation was done by numerically modeling the results of a field ponded infiltration experiment. A comparison of the experimental results and the modeled results was used to indicate qualitatively the degree to which the infiltration processes and the hydraulic properties are understood. Results of the field characterization, soil characterization, borehole geophysics, and the ponding experiment are presented in a companion paper
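A much simpler stand-in for such one-dimensional ponded-infiltration models is the classic Green-Ampt solution, sketched below; the conductivity and suction parameters are illustrative assumptions, not Yucca Mountain values:

```python
# Green-Ampt cumulative infiltration under ponding: F(t) solves the
# implicit relation F = K*t + psi_dtheta * ln(1 + F/psi_dtheta),
# found here by fixed-point iteration (a contraction for F > 0).
import math

def green_ampt_F(K, psi_dtheta, t, tol=1e-10):
    """Cumulative infiltration depth F(t) [cm] for ponded conditions."""
    F = K * t + psi_dtheta                  # rough initial guess
    for _ in range(200):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

K = 0.5            # cm/h, saturated hydraulic conductivity (assumed)
psi_dtheta = 5.0   # cm, wetting-front suction x moisture deficit (assumed)
F1 = green_ampt_F(K, psi_dtheta, 1.0)
F2 = green_ampt_F(K, psi_dtheta, 2.0)
assert F2 > F1 > K * 1.0   # early infiltration exceeds the steady rate K*t
```

Layered, cemented profiles like those in the abstract violate the uniform-soil assumptions of Green-Ampt, which is precisely why the study fell back on full 1D and 2D radial-flow numerical models.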

  14. High-pressure high-temperature experiments: Windows to the Universe

    International Nuclear Information System (INIS)

    Santaria-Perez, D.

    2011-01-01

From compositional arguments about the Earth suggested by indirect methods, such as the propagation of seismic waves, it is possible to generate in the laboratory pressure and temperature conditions similar to those of the interior of the Earth or other planets, and to study how these conditions affect a given metal or mineral. These experiments are, therefore, windows to the Universe. The aim of this chapter is to illustrate the great power of experimental high-pressure high-temperature techniques and to give a global overview of their application to different geophysical fields. Finally, we introduce the MALTA Consolider Team, which gathers most of the Spanish high-pressure community, and present its available high-pressure facilities. (Author) 28 refs.

  15. Impact of high-frequency pumping on anomalous finite-size effects in three-dimensional topological insulators

    Science.gov (United States)

    Pervishko, Anastasiia A.; Yudin, Dmitry; Shelykh, Ivan A.

    2018-02-01

    Lowering of the thickness of a thin-film three-dimensional topological insulator down to a few nanometers results in the gap opening in the spectrum of topologically protected two-dimensional surface states. This phenomenon, which is referred to as the anomalous finite-size effect, originates from hybridization between the states propagating along the opposite boundaries. In this work, we consider a bismuth-based topological insulator and show how the coupling to an intense high-frequency linearly polarized pumping can further be used to manipulate the value of a gap. We address this effect within recently proposed Brillouin-Wigner perturbation theory that allows us to map a time-dependent problem into a stationary one. Our analysis reveals that both the gap and the components of the group velocity of the surface states can be tuned in a controllable fashion by adjusting the intensity of the driving field within an experimentally accessible range and demonstrate the effect of light-induced band inversion in the spectrum of the surface states for high enough values of the pump.

  16. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability.

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-07

Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical application. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the biomorphic graphene thus produced is highly crystallized, with atomic layer-thickness controllability, structural designability and fewer non-carbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from the as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (∼110,700 S m-1 at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  17. High performance supercapacitors based on three-dimensional ultralight flexible manganese oxide nanosheets/carbon foam composites

    Science.gov (United States)

    He, Shuijian; Chen, Wei

    2014-09-01

The synthesis and capacitive performance of ultralight and flexible MnO2/carbon foam (MnO2/CF) hybrids are systematically studied. Flexible carbon foam with a low mass density of 6.2 mg cm-3 and high porosity of 99.66% is simply obtained by carbonization of commercially available and low-cost melamine resin foam. With the highly porous carbon foam as a framework, ultrathin MnO2 nanosheets are grown through an in situ redox reaction between KMnO4 and the carbon foam. The three-dimensional (3D) MnO2/CF networks exhibit a highly ordered hierarchical pore structure. Owing to their good flexibility and ultralight weight, the MnO2/CF nanomaterials can be fabricated directly into supercapacitor electrodes without any binder or conductive agents. Moreover, the pseudocapacitance of the MnO2 nanosheets is enhanced by fast ion diffusion in the three-dimensional porous architecture, by the conductive carbon foam skeleton, and by good contact at the carbon/oxide interfaces. A supercapacitor based on the MnO2/CF composite with 3.4% weight percent of MnO2 shows a high specific capacitance of 1270.5 F g-1 (92.7% of the theoretical specific capacitance of MnO2) and a high energy density of 86.2 Wh kg-1. The excellent capacitive performance of these 3D ultralight and flexible nanomaterials makes them promising candidates as electrode materials for supercapacitors.

  18. A three-dimensional stratigraphic model for aggrading submarine channels based on laboratory experiments, numerical modeling, and sediment cores

    Science.gov (United States)

    Limaye, A. B.; Komatsu, Y.; Suzuki, K.; Paola, C.

    2017-12-01

Turbidity currents deliver clastic sediment from continental margins to the deep ocean, and are the main driver of landscape and stratigraphic evolution in many low-relief, submarine environments. The sedimentary architecture of turbidites, including the spatial organization of coarse and fine sediments, is closely related to the aggradation, scour, and lateral shifting of channels. Seismic stratigraphy indicates that submarine, meandering channels often aggrade rapidly relative to lateral shifting, and develop channel sand bodies with high vertical connectivity. In comparison, the stratigraphic architecture developed by submarine, braided channels is relatively uncertain. We present a new stratigraphic model for submarine braided channels that integrates predictions from laboratory experiments and flow modeling with constraints from sediment cores. In the laboratory experiments, a saline density current developed subaqueous channels in plastic sediment. The channels aggraded to form a deposit with a vertical scale of approximately five channel depths. We collected topography data during aggradation to (1) establish relative stratigraphic age, and (2) estimate the sorting patterns of a hypothetical grain size distribution. We applied a numerical flow model to each topographic surface and used modeled flow depth as a proxy for relative grain size. We then conditioned the resulting stratigraphic model to observed grain size distributions using sediment core data from the Nankai Trough, offshore Japan. Using this stratigraphic model, we establish new, quantitative predictions for the two- and three-dimensional connectivity of coarse sediment as a function of fine-sediment fraction. Using this case study as an example, we will highlight outstanding challenges in relating the evolution of low-relief landscapes to the stratigraphic record.

  19. One-dimensional versus two-dimensional electronic states in vicinal surfaces

    International Nuclear Information System (INIS)

    Ortega, J E; Ruiz-Oses, M; Cordon, J; Mugarza, A; Kuntze, J; Schiller, F

    2005-01-01

Vicinal surfaces with periodic arrays of steps are among the simplest lateral nanostructures. In particular, noble metal surfaces vicinal to the (1 1 1) plane are excellent test systems in which to explore the basic electronic properties of one-dimensional superlattices by means of angular photoemission. These surfaces are characterized by strong emissions from free-electron-like surface states that scatter at step edges. Thereby, the two-dimensional surface state displays superlattice band folding and, depending on the step lattice constant d, splits into one-dimensional quantum well levels. Here we use high-resolution, angle-resolved photoemission to analyse surface states in a variety of samples, illustrating the changes in surface-state bands as a function of d

  20. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to optimize jointly over the regression parameters and the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules to achieve speed efficiency, by eliminating irrelevant features early.
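The joint estimation idea can be sketched as an alternating scheme: with the noise level σ fixed, solve a Lasso whose penalty scales with σ; with the coefficients fixed, re-estimate σ from the residuals, floored at a constant σ₀ (the "smoothing"). The toy coordinate-descent solver and all parameters below are assumptions for illustration, not the paper's optimized implementation, and no screening rules are included:

```python
# Alternating sketch of Smoothed Concomitant Lasso-style estimation.
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain cyclic coordinate descent for (1/2n)||y-Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * lam, 0.0) / col_sq[j]
    return beta

def smoothed_concomitant(X, y, lam, sigma_floor=1e-2, n_outer=10):
    """Alternate between a sigma-scaled Lasso and a floored noise estimate."""
    n = X.shape[0]
    sigma = max(np.std(y), sigma_floor)
    beta = np.zeros(X.shape[1])
    for _ in range(n_outer):
        beta = lasso_cd(X, y, lam * sigma)            # penalty ∝ sigma
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma_floor)
    return beta, sigma

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.5 * rng.standard_normal(100)
beta, sigma = smoothed_concomitant(X, y, lam=0.1)
# The two true features dominate; sigma tracks the noise level.
assert set(np.argsort(-np.abs(beta))[:2]) == {0, 1}
```

The floor `sigma_floor` plays the stabilizing role the abstract attributes to the smoothing; the paper's solver folds the two steps into one coordinate-descent pass with safe screening rather than this outer loop.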