WorldWideScience

Sample records for high dimensional similarity

  1. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data differences in sparse and noisy dimensions occupy a large proportion of the similarity, so that the dissimilarities between any pair of points become nearly indistinguishable. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude greater than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
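    The interval-mapping idea in the abstract can be sketched minimally as follows; the number of intervals, the equal-width binning, and the per-dimension similarity `1 - |x_i - y_i| / range_i` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def lattice_similarity(x, y, lo, hi, n_bins=10):
    # Map each component onto its interval (bin) within the per-dimension range.
    bx = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    by = np.clip(((y - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    # Only dimensions whose components fall in the same or an adjacent interval
    # contribute to the similarity; sparse/noisy dimensions are ignored.
    mask = np.abs(bx - by) <= 1
    if not mask.any():
        return 0.0
    # Per-dimension similarity normalized so the result stays in [0, 1].
    d = np.abs(x - y) / (hi - lo)
    return float(np.mean(1.0 - d[mask]))
```

    Dimensions whose bins differ by more than one are dropped entirely, which is what keeps widely scattered (noisy) coordinates from dominating the score.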

  2. Similarity-dissimilarity plot for visualization of high dimensional data in biomedical pattern classification.

    Science.gov (United States)

    Arif, Muhammad

    2012-06-01

    In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a similarity-dissimilarity plot which can project a high-dimensional space to a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The similarity-dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the similarity-dissimilarity plot. Various examples of synthetic data are used to highlight the important characteristics of the proposed plot. Some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
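    One plausible reading of such a plot (the paper's exact construction is not reproduced in the abstract, so the coordinates below are an assumption) maps each sample to its mean distance to its own class versus its mean distance to the other classes; well-separated samples then sit far above the diagonal:

```python
import numpy as np

def similarity_dissimilarity_coords(X, y):
    # For each sample: (mean distance to its own class, mean distance to other classes).
    # Points with small within-class and large between-class distance are well separated.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    coords = np.empty((len(X), 2))
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False                      # exclude the point itself
        coords[i, 0] = D[i, same].mean() if same.any() else 0.0
        coords[i, 1] = D[i, y != y[i]].mean()
    return coords
```

    Plotting these two coordinates gives a 2-D view regardless of the original feature dimension, and points whose dissimilarity coordinate is not clearly larger than their similarity coordinate are the candidates for misclassification.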

  3. Clustering high dimensional data

    DEFF Research Database (Denmark)

    Assent, Ira

    2012-01-01

    High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known … to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with an increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches … for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster…

  4. Mining High-Dimensional Data

    Science.gov (United States)

    Wang, Wei; Yang, Jiong

    With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges in mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in a high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.

  5. Dimensional analysis, similarity, analogy, and the simulation theory

    International Nuclear Information System (INIS)

    Davis, A.A.

    1978-01-01

    Dimensional analysis, similarity, analogy, and cybernetics are shown to be four consecutive steps in the application of the simulation theory. This paper introduces the classes of phenomena which follow the same formal mathematical equations as models of the natural laws, and the interior sphere of restraints groups phenomena for which one can introduce simplified nondimensional mathematical equations. Simulation by similarity in a specific field of physics, by analogy in two or more different fields of physics, and by cybernetics in nature in two or more fields of mathematics, physics, biology, economics, politics, sociology, etc., appears as a unique theory which permits one to transport the results of experiments from the models, conveniently selected to meet the conditions of research, construction, and measurement in the laboratories, to the originals which are the primary objectives of the research. Some interesting conclusions which cannot be avoided in the use of simplified nondimensional mathematical equations as models of natural laws are presented. Interesting limitations on the use of the simulation theory based on assumed simplifications are recognized. This paper shows it to be necessary, in scientific research, to write mathematical models of general laws which can be applied to nature in its entirety. The paper proposes extending the second law of thermodynamics, as the generalized law of entropy, to model life and its activities. This paper shows that the physical studies and philosophical interpretations of phenomena and natural laws cannot be separated in scientific work; they are interconnected, and one cannot be put above the others.

  6. High dimensional entanglement

    CSIR Research Space (South Africa)

    McLaren, M.

    2012-07-01

    Full Text Available High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...

  7. hdm: High-dimensional metrics

    OpenAIRE

    Chernozhukov, Victor; Hansen, Christian; Spindler, Martin

    2016-01-01

    In this article the package High-dimensional Metrics (\\texttt{hdm}) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...

  8. Generalized similarity method in unsteady two-dimensional MHD ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology, Vol. 1, No. 1, 2009. … temperature two-dimensional MHD laminar boundary layer of incompressible fluid. … Φ(η) is the Blasius solution for the stationary boundary layer on the plate.

  9. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
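    The core referential idea, storing only matches against a reference plus literal mismatches, can be sketched as below. The naive quadratic matcher is purely illustrative; tools like FRESCO rely on index structures (e.g. suffix-based) for speed:

```python
def ref_compress(target, reference):
    # Greedy longest-match referential encoding:
    # each entry is (position in reference, match length, next literal char).
    entries, i = [], 0
    while i < len(target):
        best_pos, best_len = 0, 0
        for j in range(len(reference)):          # naive O(n*m) search
            l = 0
            while (j + l < len(reference) and i + l < len(target)
                   and reference[j + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        literal = target[i + best_len] if i + best_len < len(target) else ''
        entries.append((best_pos, best_len, literal))
        i += best_len + 1
    return entries

def ref_decompress(entries, reference):
    # Replay each (pos, length, literal) entry against the shared reference.
    out = []
    for pos, length, literal in entries:
        out.append(reference[pos:pos + length])
        out.append(literal)
    return ''.join(out)
```

    For highly similar sequences almost the whole target collapses into a handful of long-match entries, which is where the large compression ratios come from.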

  10. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

    Full Text Available Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes the typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in an inappropriate graph representation. In this work, a graph-based discriminant analysis with a spectral similarity measurement (denoted GDA-SS) is proposed, which fully considers how the spectral curves change across spectral bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).

  11. One dimensional beam. Asymptotic and self similar solutions

    International Nuclear Information System (INIS)

    Feix, M.R.; Duranceau, J.L.; Besnard, D.

    1982-06-01

    Rescaling transformations provide a useful tool to solve nonlinear problems described by partial derivative equations. A brief review of this method is presented together with the connection with the self similar solutions obtained by compacting the independent variable with one of them (the time). The general theory is reported through examples found in Plasma Physics with a careful distinction between systems described by Hamiltonian and others where irreversible phenomena, like diffusion, are taken into account

  12. Faithful representation of similarities among three-dimensional shapes in human vision.

    Science.gov (United States)

    Cutzu, F; Edelman, S

    1996-01-01

    Efficient and reliable classification of visual stimuli requires that their representations reside in a low-dimensional and, therefore, computationally manageable feature space. We investigated the ability of the human visual system to derive such representations from the sensory input: a highly nontrivial task, given the million or so dimensions of the visual signal at its entry point to the cortex. In a series of experiments, subjects were presented with sets of parametrically defined shapes; the points in the common high-dimensional parameter space corresponding to the individual shapes formed regular planar (two-dimensional) patterns such as a triangle, a square, etc. We then used multidimensional scaling to arrange the shapes in planar configurations, dictated by their experimentally determined perceived similarities. The resulting configurations closely resembled the original arrangements of the stimuli in the parameter space. This achievement of the human visual system was replicated by a computational model derived from a theory of object representation in the brain, according to which similarities between objects, and not the geometry of each object, need to be faithfully represented. PMID: 8876260

  13. Dimensionally similar discharges with central rf heating on the DIII-D tokamak

    International Nuclear Information System (INIS)

    Petty, C.C.; Luce, T.C.; Pinsker, R.I.

    1993-04-01

    The scaling of L-mode heat transport with normalized gyroradius is investigated on the DIII-D tokamak using central rf heating. A toroidal field scan of dimensionally similar discharges with central ECH and/or fast wave heating shows gyro-Bohm-like scaling both globally and locally. The main difference between these results and those using NBI heating on DIII-D is that with rf heating the deposition profile is not very sensitive to the plasma density. Therefore, central heating can be utilized for both the low-B and high-B discharges, whereas for NBI the power deposition is decidedly off-axis for the high-B discharge (i.e., high density).

  14. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  15. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

    This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of  dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme

  16. Collapsing perfect fluid in self-similar five dimensional space-time and cosmic censorship

    International Nuclear Information System (INIS)

    Ghosh, S.G.; Sarwe, S.B.; Saraykar, R.V.

    2002-01-01

    We investigate the occurrence and nature of naked singularities in the gravitational collapse of a self-similar adiabatic perfect fluid in a five dimensional space-time. The naked singularities are found to be gravitationally strong in the sense of Tipler and thus violate the cosmic censorship conjecture

  17. High-Dimensional Metrics in R

    OpenAIRE

    Chernozhukov, Victor; Hansen, Chris; Spindler, Martin

    2016-01-01

    The package High-dimensional Metrics (\\Rpackage{hdm}) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...

  18. Similarity and self-similarity in high energy density physics: application to laboratory astrophysics

    International Nuclear Information System (INIS)

    Falize, E.

    2008-10-01

    The spectacular recent development of powerful facilities allows the astrophysical community to explore, in the laboratory, astrophysical phenomena where radiation and matter are strongly coupled. The titles of the nine chapters of the thesis are: from high energy density physics to laboratory astrophysics; Lie groups, invariance and self-similarity; scaling laws and similarity properties in high-energy-density physics; the Burgan-Feix-Munier transformation; dynamics of polytropic gases; stationary radiating shocks and the POLAR project; structure, dynamics and stability of optically thin fluids; from young star jets to laboratory jets; modelling and experiments for laboratory jets

  19. Density-based retrieval from high-similarity image databases

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Carstensen, Jens Michael

    2004-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce a me…

  20. Efficient estimation for high similarities using odd sketches

    DEFF Research Database (Denmark)

    Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang

    2014-01-01

    … This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector … and web duplicate detection tasks.
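    For orientation, the classic MinHash estimator that Odd Sketches refine can be sketched as follows; Odd Sketches themselves store a compact parity bit vector rather than the raw minima, and that variant is not reproduced here:

```python
import random

def minhash_signature(items, n_hashes=200, seed=7):
    # One salted hash per row simulates n_hashes independent permutations;
    # the row minimum over the set is the signature entry.
    rnd = random.Random(seed)
    salts = [rnd.getrandbits(64) for _ in range(n_hashes)]
    return [min(hash((salt, it)) for it in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    # The probability that the minima agree equals the Jaccard similarity,
    # so the fraction of agreeing rows is an unbiased estimate of it.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

    The estimator's variance shrinks as the number of hash rows grows, which is exactly the space/accuracy trade-off that Odd Sketches improve for near-identical sets.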

  1. Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.

    Science.gov (United States)

    Ercan, Ali; Kavvas, M Levent

    2017-07-25

    Scaling conditions to achieve self-similar solutions of the 3-dimensional (3D) Reynolds-averaged Navier-Stokes equations, as an initial and boundary value problem, are obtained by utilizing the Lie group of point scaling transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrate self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations under consideration but also through the initial and boundary conditions, hence allowing one to obtain perfect self-similarity at different time and space scales. The proposed methodology can be a valuable tool for obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.

  2. Violation of self-similarity in the expansion of a one-dimensional Bose gas

    International Nuclear Information System (INIS)

    Pedri, P.; Santos, L.; Oehberg, P.; Stringari, S.

    2003-01-01

    The expansion of a one-dimensional Bose gas after releasing its initial harmonic confinement is investigated employing the Lieb-Liniger equation of state within the local-density approximation. We show that during the expansion the density profile of the gas does not follow a self-similar solution, as one would expect from a simple scaling ansatz. We carry out a variational calculation, which recovers the numerical results for the expansion, the equilibrium properties of the density profile, and the frequency of the lowest compressional mode. The variational approach allows for the analysis of the expansion in all interaction regimes between the mean-field and the Tonks-Girardeau limits, and in particular shows the range of parameters for which the expansion violates self-similarity

  3. Spectral analysis of multi-dimensional self-similar Markov processes

    International Nuclear Information System (INIS)

    Modarresi, N; Rezakhah, S

    2010-01-01

    In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(.) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ R+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, with the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T − 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.

  4. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    Science.gov (United States)

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED, we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336
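    Plain (query-independent) equi-depth quantization, the starting point that QED makes query-dependent, can be sketched per dimension as below; the bin count is an arbitrary choice for illustration:

```python
import numpy as np

def equi_depth_bins(col, n_bins=4):
    # Place bin boundaries at quantiles so every bin holds roughly the
    # same number of points, unlike equal-width binning.
    edges = np.quantile(col, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, col, side='right')
```

    Equal-depth bins keep resolution where the data actually lie; the query-dependent variant described in the abstract additionally adapts the boundaries around each query point.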

  5. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando

    2017-01-01

    … aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel…

  6. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as the quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  7. When high similarity copycats lose and moderate similarity copycats gain: The impact of comparative evaluation

    NARCIS (Netherlands)

    Van Horen, F.; Pieters, R.

    2012-01-01

    Copycats imitate features of leading brands to free ride on their equity. The prevailing belief is that the more similar copycats are to the leader brand, the more positive their evaluation is, and thus the more they free ride. Three studies demonstrate when the reverse holds true:

  9. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and the construction of confidence bands in high-dimensional models. However … develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality, which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow…

  10. Clustering high dimensional data using RIA

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the dissimilarity patterns among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We note that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
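    The angle-based flavour of such a dissimilarity can be sketched as below; the robust estimation of the eigenstructure, which is the actual contribution of RIA, is omitted here and the projection matrix is taken as given:

```python
import numpy as np

def influence_angle(x, y, components):
    # Dissimilarity as the angle between the principal-component score
    # vectors of two observations (0 = same direction, pi = opposite).
    sx, sy = components @ x, components @ y
    c = sx @ sy / (np.linalg.norm(sx) * np.linalg.norm(sy))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```

    Because the angle depends on direction rather than magnitude, it stays informative even when raw distances concentrate in high dimensions.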

  11. Highly conducting one-dimensional solids

    CERN Document Server

    Evrard, Roger; Doren, Victor

    1979-01-01

    Although the problem of a metal in one dimension has long been known to solid-state physicists, it was not until the synthesis of real one-dimensional or quasi-one-dimensional systems that this subject began to attract considerable attention. This has been due in part to the search for high-temperature superconductivity and the possibility of reaching this goal with quasi-one-dimensional substances. A period of intense activity began in 1973 with the report of a measurement of an apparently divergent conductivity peak in TTF-TCNQ. Since then a great deal has been learned about quasi-one-dimensional conductors. The emphasis now has shifted from trying to find materials of very high conductivity to the many interesting problems of physics and chemistry involved. But many questions remain open and are still under active investigation. This book gives a review of the experimental as well as theoretical progress made in this field over the last years. All the chapters have been written by scientists who have ...

  12. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    Directory of Open Access Journals (Sweden)

    Wei Zeng

    Full Text Available Recommender systems are designed to assist individual users in navigating through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increases the global diversification of the networks in the long term.
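    Classical (Torgerson) MDS, one standard way to realize the embedding step described above, double-centers the squared distance matrix and embeds with the top eigenvectors; choosing the classical rather than the non-metric variant here is an assumption:

```python
import numpy as np

def classical_mds(D, k=2):
    # D: symmetric matrix of pairwise distances.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:k]              # largest eigenvalues first
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

    For exactly Euclidean distance matrices the embedding reproduces the pairwise distances; for noisy similarity data the discarded small eigenvalues are what "smooths out noise".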

  13. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    Science.gov (United States)

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

    Recommender systems are designed to assist individual users in navigating through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increases the global diversification of the networks in the long term.

  14. Introduction to high-dimensional statistics

    CERN Document Server

    Giraud, Christophe

    2015-01-01

    Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha

  15. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows
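In the special case of an orthonormal design, the adaptive LASSO has a closed form: componentwise soft-thresholding of the OLS estimates with weights 1/|b|^γ, so small coefficients receive a heavier penalty and are more easily zeroed out. A minimal sketch under that assumption (illustrative values; the paper's time-series setting is far more general):

```python
def adalasso_orthonormal(b_ols, lam, gamma=1.0):
    """Adaptive-LASSO estimates under an orthonormal design:
    componentwise soft-thresholding with data-driven weights 1/|b|^gamma."""
    out = []
    for b in b_ols:
        if b == 0:
            out.append(0.0)
            continue
        w = 1.0 / abs(b) ** gamma       # small OLS coefficients -> large penalty
        t = abs(b) - lam * w            # soft-threshold at lam * w
        out.append((1 if b > 0 else -1) * t if t > 0 else 0.0)
    return out

# The small coefficient 0.1 is killed, the large ones barely shrink.
print(adalasso_orthonormal([2.0, 0.1, -1.5], lam=0.2))
```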

  16. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some

  17. Topology of high-dimensional manifolds

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)

    2002-08-15

    The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.

  18. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan

    2017-03-27

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.

  19. Modeling high dimensional multichannel brain signals

    KAUST Repository

    Hu, Lechuan; Fortin, Norbert; Ombao, Hernando

    2017-01-01

    In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE (LASSO+LSE) method, which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights into learning in a rat engaged in a non-spatial memory task.

  20. Three-dimensional object recognition using similar triangles and decision trees

    Science.gov (United States)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
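The triangle features TRIDEC uses can be mimicked by describing each three-pixel triple with a quantity unchanged by translation, scaling, and rotation, for example the side lengths sorted and normalized by the longest side. A small sketch under that assumption (the original coarse codes a 127 x 127 input field; the names and data here are illustrative):

```python
from math import dist            # Python 3.8+
from itertools import combinations

def triangle_signature(p, q, r):
    """Translation/scale/rotation-invariant signature of a triangle:
    sorted side lengths, normalized by the longest side."""
    sides = sorted([dist(p, q), dist(q, r), dist(r, p)])
    longest = sides[2]
    return tuple(round(s / longest, 6) for s in sides)

def feature_set(points):
    """Signatures of all triangles over all 3-point combinations."""
    return {triangle_signature(*t) for t in combinations(points, 3)}

# A shape and a scaled-and-translated copy yield identical feature sets.
shape = [(0, 0), (4, 0), (0, 3)]
moved = [(10, 10), (18, 10), (10, 16)]   # scaled by 2, then translated
print(feature_set(shape) == feature_set(moved))
```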

  1. Comparative Analysis of Mass Spectral Similarity Measures on Peak Alignment for Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry

    Science.gov (United States)

    2013-01-01

    Peak alignment is a critical procedure in mass spectrometry-based biomarker discovery in metabolomics. One of the peak alignment approaches for comprehensive two-dimensional gas chromatography mass spectrometry (GC×GC-MS) data is peak matching-based alignment. A key step in peak matching-based alignment is the calculation of mass spectral similarity scores. Various mass spectral similarity measures have been developed, mainly for compound identification, but the effect of these spectral similarity measures on the performance of peak matching-based alignment remains unknown. Therefore, we selected five mass spectral similarity measures, cosine correlation, Pearson's correlation, Spearman's correlation, partial correlation, and part correlation, and examined their effects on peak alignment using two sets of experimental GC×GC-MS data. The results show that the spectral similarity measure does not significantly affect the alignment accuracy when analyzing data from less complex samples, while partial correlation performs much better than the other spectral similarity measures when analyzing experimental data acquired from complex biological samples. PMID:24151524
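Two of the compared measures are easy to state concretely: cosine correlation is the normalized dot product of the two intensity vectors on a common m/z grid, and Pearson's correlation is the same quantity after mean-centering both vectors. A minimal sketch with made-up intensities:

```python
from math import sqrt

def cosine_sim(x, y):
    """Cosine correlation: normalized dot product of two intensity vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

def pearson_sim(x, y):
    """Pearson's correlation: cosine of the mean-centered vectors."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return cosine_sim([a - mx for a in x], [b - my for b in y])

# Two spectra binned onto a common m/z grid (intensities are invented).
s1 = [0.0, 10.0, 50.0, 100.0, 5.0]
s2 = [0.0, 12.0, 45.0, 90.0, 4.0]
print(round(cosine_sim(s1, s2), 4), round(pearson_sim(s1, s2), 4))
```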

  2. Dimensionally similar studies of confinement and H-mode transition in ASDEX Upgrade and JET

    International Nuclear Information System (INIS)

    Ryter, F.; Stober, J.; Suttrop, W.

    2001-01-01

    Joint experiments on confinement and L-H transition were performed in ASDEX Upgrade and JET. The confinement experiments suggest that the invariance principle is not always fulfilled at high density. For the L-H transition studies, the dimensionless variables taken at the plasma edge can, in general, only be made identical in pairs, due to the condition imposed by the L-H transition. This new approach to investigating the L-H physics suggests a weak dependence of the L-H transition mechanism on collisionality. (author)

  3. Dimensionally similar studies of confinement and H-mode transition in ASDEX Upgrade and JET

    International Nuclear Information System (INIS)

    Ryter, F.; Stober, J.; Suttrop, W.

    1999-01-01

    Joint experiments on confinement and L-H transition were performed in ASDEX Upgrade and JET. The confinement experiments suggest that the invariance principle is not always fulfilled at high density. For the L-H transition studies, the dimensionless variables taken at the plasma edge can, in general, only be made identical in pairs, due to the condition imposed by the L-H transition. This new approach to investigating the L-H physics suggests a weak dependence of the L-H transition mechanism on collisionality. (author)

  4. Towards novel organic high-Tc superconductors: Data mining using density of states similarity search

    Science.gov (United States)

    Geilhufe, R. Matthias; Borysov, Stanislav S.; Kalpakchi, Dmytro; Balatsky, Alexander V.

    2018-02-01

    Identifying novel functional materials with desired key properties is an important part of bridging the gap between fundamental research and technological advancement. In this context, high-throughput calculations combined with data-mining techniques have greatly accelerated this process in different areas of research during the past years. The strength of a data-driven approach to materials prediction lies in narrowing down a search space of thousands of materials to a subset of prospective candidates. Recently, the open-access organic materials database OMDB was released, providing electronic structure data for thousands of previously synthesized three-dimensional organic crystals. Based on the OMDB, we report on the implementation of a novel density of states similarity search tool capable of retrieving materials with a density of states similar to that of a reference material. The tool is based on the approximate nearest neighbor algorithm as implemented in the ANNOY library and can be applied via the OMDB web interface. The approach presented here is wide ranging and can be applied to various problems where the density of states is responsible for certain key properties of a material. As a first application, we report on materials exhibiting electronic structure similarities to the aromatic hydrocarbon p-terphenyl, which was recently discussed as a potential organic high-temperature superconductor with a transition temperature on the order of 120 K under strong potassium doping. Although the mechanism driving the remarkable transition temperature remains under debate, we argue that the density of states, reflecting the electronic structure of a material, might serve as a crucial ingredient for the observed high Tc. To provide candidates which might exhibit comparable properties, we present 15 purely organic materials with features similar to p-terphenyl within the electronic structure, which also tend to have structural similarities with p

  5. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
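A heavily simplified sketch of the two-step idea (not the actual LASSLE estimator): step 1 screens candidate predictors for sparsity, here by thresholding marginal least-squares coefficients in place of the LASSO, and step 2 refits the survivors by least squares. Data, names, and the threshold are all illustrative:

```python
def ols_1d(x, y):
    """Least-squares slope for y ≈ b * x (no intercept)."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def two_step_sketch(X_cols, y, tau=0.2):
    """Two-step stand-in for the hybrid LASSLE idea: (1) keep only
    candidates whose marginal coefficient exceeds tau (the sparsity step);
    (2) refit the survivors by least squares (the LSE step).  With the
    orthogonal columns below, marginal refits equal the joint LSE.
    Returns {column index: refitted coefficient}."""
    support = [j for j, c in enumerate(X_cols) if abs(ols_1d(c, y)) > tau]
    return {j: ols_1d(X_cols[j], y) for j in support}

# Two candidate channels; only the first actually drives the response.
x1 = [1.0, -1.0, 1.0, -1.0]
x2 = [1.0, 1.0, -1.0, -1.0]
y = [0.85, -0.82, 0.81, -0.77]   # roughly 0.8 * x1 plus small noise
print(two_step_sketch([x1, x2], y))
```

In the real method the first step is a LASSO fit over all VAR lag coefficients and the second is a constrained least-squares refit; the sketch only mirrors the screen-then-refit structure.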

  6. Self-organization phenomena and decaying self-similar state in two-dimensional incompressible viscous fluids

    International Nuclear Information System (INIS)

    Kondoh, Yoshiomi; Serizawa, Shunsuke; Nakano, Akihiro; Takahashi, Toshiki; Van Dam, James W.

    2004-01-01

    The final self-similar state of decaying two-dimensional (2D) turbulence in 2D incompressible viscous flow is analytically and numerically investigated for the case with periodic boundaries. It is proved by theoretical analysis and simulations that the sinh-Poisson state cω=-sinh(βψ) is not realized in the dynamical system of interest. It is shown by an eigenfunction spectrum analysis that a sufficient explanation for the self-organization to the decaying self-similar state is the faster energy decay of higher eigenmodes and the energy accumulation to the lowest eigenmode for given boundary conditions due to simultaneous normal and inverse cascading by nonlinear mode couplings. The theoretical prediction is demonstrated to be correct by simulations leading to the lowest eigenmode of {(1,0)+(0,1)} of the dissipative operator for the periodic boundaries. It is also clarified that an important process during nonlinear self-organization is an interchange between the dominant operators, which leads to the final decaying self-similar state

  7. Geomfinder: a multi-feature identifier of similar three-dimensional protein patterns: a ligand-independent approach.

    Science.gov (United States)

    Núñez-Vivanco, Gabriel; Valdés-Jiménez, Alejandro; Besoaín, Felipe; Reyes-Parada, Miguel

    2016-01-01

    Since the structure of proteins is more conserved than the sequence, the identification of conserved three-dimensional (3D) patterns among a set of proteins can be important for protein function prediction, protein clustering, drug discovery and the establishment of evolutionary relationships. Thus, several computational applications to identify, describe and compare 3D patterns (or motifs) have been developed. Often, these tools consider a 3D pattern as that described by the residues surrounding co-crystallized/docked ligands available from X-ray crystal structures or homology models. Nevertheless, many of the protein structures stored in public databases do not provide information about the location and characteristics of ligand binding sites and/or other important 3D patterns such as allosteric sites, enzyme-cofactor interaction motifs, etc. This makes it necessary to develop new ligand-independent methods to search and compare 3D patterns in all available protein structures. Here we introduce Geomfinder, an intuitive, flexible, alignment-free and ligand-independent web server for detailed estimation of similarities between all pairs of 3D patterns detected in any two given protein structures. We used around 1100 protein structures to form pairs of proteins which were assessed with Geomfinder. In these analyses each protein was considered in only one pair (e.g. in a subset of 100 different proteins, 50 pairs of proteins can be defined). 
Thus: (a) Geomfinder detected identical pairs of 3D patterns in a series of monoamine oxidase-B structures, which corresponded to the effectively similar ligand binding sites at these proteins; (b) we identified structural similarities among pairs of protein structures which are targets of compounds such as acarbose, benzamidine, adenosine triphosphate and pyridoxal phosphate; these similar 3D patterns are not detected using sequence-based methods; (c) the detailed evaluation of three specific cases showed the versatility

  8. Evaluating Clustering in Subspace Projections of High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Günnemann, Stephan; Assent, Ira

    2009-01-01

    Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering

  9. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  10. Detection of structurally similar adulterants in botanical dietary supplements by thin-layer chromatography and surface enhanced Raman spectroscopy combined with two-dimensional correlation spectroscopy.

    Science.gov (United States)

    Li, Hao; Zhu, Qing xia; Chwee, Tsz sian; Wu, Lin; Chai, Yi feng; Lu, Feng; Yuan, Yong fang

    2015-07-09

    Thin-layer chromatography (TLC) coupled with surface enhanced Raman spectroscopy (SERS) has been widely used for the study of various complex systems, especially for the detection of adulterants in botanical dietary supplements (BDS). However, this method is not sufficient to distinguish structurally similar adulterants in BDS, since the analogs have highly similar chromatographic and/or spectroscopic behaviors. Given that comprehensive chromatographic separation would require higher cost and more time, effort has instead focused on the spectroscopic side: analyzing the overlapped SERS peaks. In this paper, the combination of the TLC-SERS method with two-dimensional correlation spectroscopy (2DCOS), with the duration of laser exposure as the perturbation, is applied to solve this problem. Besides the usual advantages of the TLC-SERS method, such as its simplicity, rapidity, and sensitivity, 2DCOS adds further advantages, such as enhanced selectivity and good reproducibility. Two chemicals with similar structures are successfully differentiated from the complex BDS matrices. The study provides a more accurate qualitative screening method for the detection of adulterated BDS, and offers a new universal approach for the analysis of highly overlapped SERS peaks. Copyright © 2015 Elsevier B.V. All rights reserved.
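The synchronous part of 2DCOS is simple to state: mean-center the perturbation-ordered series of spectra and correlate every pair of spectral channels, so that bands changing together give positive cross-peaks and bands changing oppositely give negative ones. A minimal sketch of the synchronous map, with made-up intensities:

```python
def synchronous_2dcos(spectra):
    """Synchronous 2D correlation map from a perturbation-ordered series of
    spectra (equal-length intensity lists):
    phi[i][j] = sum_k yd_k[i] * yd_k[j] / (m - 1),
    where yd are the mean-centered (dynamic) spectra."""
    m, n = len(spectra), len(spectra[0])
    mean = [sum(s[i] for s in spectra) / m for i in range(n)]
    dyn = [[s[i] - mean[i] for i in range(n)] for s in spectra]
    return [[sum(dyn[k][i] * dyn[k][j] for k in range(m)) / (m - 1)
             for j in range(n)] for i in range(n)]

# Three spectra at increasing laser-exposure times (the perturbation):
# bands 0 and 1 grow together while band 2 shrinks.
series = [
    [1.0, 2.0, 5.0],
    [2.0, 3.0, 4.0],
    [3.0, 4.0, 3.0],
]
phi = synchronous_2dcos(series)
print(phi[0][1] > 0, phi[0][2] < 0)  # correlated vs anti-correlated bands
```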

  11. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  12. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.

  13. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Clustering techniques for high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method that clusters high-dimensional data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring the number of clusters from the user. The PCM is made similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further optimized using an ant colony algorithm with swarm intelligence. The result is a scalable clustering technique, and the evaluation results are verified on synthetic datasets.

  14. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and for accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm based on an extension of the previous minimal cubing approach. This method partitions the high dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
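The minimal-cubing idea that the shell approach extends can be sketched in a few lines: instead of materializing all cuboids, precompute one inverted index per dimension and intersect tid-sets online to answer queries over any dimension subset. A toy illustration (not the paper's hierarchical partitioning; the table and names are invented):

```python
from functools import reduce

# Toy fact table: tuple id -> values in dimensions (A, B, C, D).
rows = [
    ("a1", "b1", "c1", "d1"),
    ("a1", "b2", "c1", "d2"),
    ("a2", "b1", "c2", "d1"),
    ("a2", "b2", "c1", "d2"),
]

def build_inverted_index(rows):
    """Per-dimension inverted index: (dimension, value) -> set of tuple ids.
    This replaces materializing every cuboid up front."""
    index = {}
    for tid, row in enumerate(rows):
        for dim, val in enumerate(row):
            index.setdefault((dim, val), set()).add(tid)
    return index

def query_count(index, conditions):
    """Answer a point query on any subset of dimensions by intersecting
    the precomputed tid-sets online."""
    sets = [index[(d, v)] for d, v in conditions]
    return len(reduce(set.intersection, sets))

idx = build_inverted_index(rows)
print(query_count(idx, [(0, "a1"), (2, "c1")]))  # COUNT where A=a1 AND C=c1
```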

  15. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  16. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  17. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  18. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    Science.gov (United States)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  19. Similarity of High-Resolution Tandem Mass Spectrometry Spectra of Structurally Related Micropollutants and Transformation Products

    Science.gov (United States)

    Schollée, Jennifer E.; Schymanski, Emma L.; Stravs, Michael A.; Gulde, Rebekka; Thomaidis, Nikolaos S.; Hollender, Juliane

    2017-12-01

    High-resolution tandem mass spectrometry (HRMS2) with electrospray ionization is frequently applied to study polar organic molecules such as micropollutants. Fragmentation provides structural information to confirm structures of known compounds or propose structures of unknown compounds. Similarity of HRMS2 spectra between structurally related compounds has been suggested to facilitate identification of unknown compounds. To test this hypothesis, the similarity of reference standard HRMS2 spectra was calculated for 243 pairs of micropollutants and their structurally related transformation products (TPs); for comparison, spectral similarity was also calculated for 219 pairs of unrelated compounds. Spectra were measured on Orbitrap and QTOF mass spectrometers and similarity was calculated with the dot product. The influence of different factors on spectral similarity [e.g., normalized collision energy (NCE), merging fragments from all NCEs, and shifting fragments by the mass difference of the pair] was considered. Spectral similarity increased at higher NCEs and highest similarity scores for related pairs were obtained with merged spectra including measured fragments and shifted fragments. Removal of the monoisotopic peak was critical to reduce false positives. Using a spectral similarity score threshold of 0.52, 40% of related pairs and 0% of unrelated pairs were above this value. Structural similarity was estimated with the Tanimoto coefficient and pairs with higher structural similarity generally had higher spectral similarity. Pairs where one or both compounds contained heteroatoms such as sulfur often resulted in dissimilar spectra. This work demonstrates that HRMS2 spectral similarity may indicate structural similarity and that spectral similarity can be used in the future to screen complex samples for related compounds such as micropollutants and TPs, assisting in the prioritization of non-target compounds.
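The core scoring step can be sketched concretely: compute dot products of the two fragment spectra after removing the precursor (monoisotopic) peaks, comparing the transformation-product spectrum both as measured and shifted by the parent-TP mass difference. The sketch below takes the maximum of the two scores rather than merging measured and shifted fragments into one spectrum as the paper does, and the spectra and tolerance are invented:

```python
from math import sqrt

def dot_similarity(a, b, tol=0.01):
    """Normalized dot product of two spectra given as {m/z: intensity}
    dicts; fragments count as matching within a small m/z tolerance."""
    matched = sum(ia * ib for mza, ia in a.items()
                  for mzb, ib in b.items() if abs(mza - mzb) <= tol)
    na = sqrt(sum(i * i for i in a.values()))
    nb = sqrt(sum(i * i for i in b.values()))
    return matched / (na * nb)

def related_pair_score(parent, tp, delta, tol=0.01):
    """Drop each spectrum's highest-m/z (precursor) peak, then score the
    TP spectrum both as measured and shifted by the parent-TP mass
    difference delta, keeping the larger score.  (Taking the max is a
    simplification of the paper's merged-spectrum comparison.)"""
    p = {mz: i for mz, i in parent.items() if mz != max(parent)}
    t = {mz: i for mz, i in tp.items() if mz != max(tp)}
    shifted = {mz + delta: i for mz, i in t.items()}
    return max(dot_similarity(p, t, tol), dot_similarity(p, shifted, tol))

# Hypothetical parent and hydroxylated TP: +16 Da moves one fragment.
parent = {180.10: 40, 120.05: 100, 90.00: 50}
tp     = {196.10: 40, 136.05: 100, 90.00: 50}
print(related_pair_score(parent, tp, delta=-16.0))
```

Without the shift, only the common fragment at m/z 90 matches; shifting the TP fragments by -16 Da also aligns the 136.05 fragment with 120.05, raising the score.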

  20. Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning

    Science.gov (United States)

    Sagun, Levent

    This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of their critical points. This section provides empirical evidence that gradient descent converges to local minima whose energies are near the predicted threshold, justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, a comparative analysis of gradient descent and its stochastic version shows that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses, and a Gaussian-like distribution that appears in the conjugate gradient method and in deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of deep learning loss functions is studied. The spectrum is seen to be composed of two parts: the bulk, which is concentrated around zero, and the edges, which are scattered away from zero. Empirical evidence is presented that the bulk indicates how over-parametrized the system is, while the edges depend on the input data. Furthermore, an algorithm is proposed such that it would

  1. A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Hongchao Song

    2017-01-01

    Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space: the distances between any pair of samples become similar, and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph (K-NNG) based anomaly detectors. Benefiting from its nonlinear mapping ability, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets randomly sampled from the whole dataset, and the final prediction combines all the detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the hybrid model improves detection accuracy and reduces computational complexity.
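
    A minimal sketch of the two-stage idea, with ordinary PCA standing in for the deep autoencoder (an assumption for brevity; the paper trains a DAE) and plain k-NN distances standing in for the K-NNG detectors; all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_compress(X, dim):
    """Linear stand-in for the deep autoencoder: project the data
    onto its top principal components for a compact representation."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:dim].T

def knn_score(train, queries, k=5):
    """Anomaly score = mean distance to the k nearest training points."""
    d = np.linalg.norm(queries[:, None, :] - train[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, :k].mean(axis=1)

def ensemble_scores(Z_train, Z_test, n_detectors=10, frac=0.6, k=5):
    """Average the scores of several k-NN detectors, each built on a
    random subset of the nominal training sample."""
    n = len(Z_train)
    m = int(frac * n)
    scores = [knn_score(Z_train[rng.choice(n, size=m, replace=False)], Z_test, k)
              for _ in range(n_detectors)]
    return np.mean(scores, axis=0)

# Nominal data: a tight cloud in 50-D; the test set has one planted outlier
X_train = 0.1 * rng.normal(size=(200, 50))
X_test = np.vstack([0.1 * rng.normal(size=(5, 50)), np.full((1, 50), 3.0)])
Z = pca_compress(np.vstack([X_train, X_test]), dim=5)
scores = ensemble_scores(Z[:200], Z[200:])
print(scores.argmax())  # → 5, the planted outlier
```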

  2. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted considerable recent attention in many fields, including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any epsilon > 0 the conditions delta_k^A < 1/3 + epsilon, delta_k^A + theta_{k,k}^A < 1 + epsilon, or delta_{tk}^A < sqrt((t-1)/t) + epsilon are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions delta_k^A < 1/3, delta_k^A + theta_{k,k}^A < 1, delta_{tk}^A < sqrt((t-1)/t) and delta_r^M < 1/3, delta_r^M + theta_{r,r}^M < 1, delta_{tr}^M < sqrt((t-1)/t) are shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
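
    Restricted isometry constants can be checked numerically for small instances. The brute-force sketch below (exponential in k, so toy sizes only; the matrix and sizes are illustrative) computes delta_k for a normalized Gaussian matrix by scanning the extreme singular values of every k-column submatrix.

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, k):
    """Brute-force delta_k: the smallest delta such that
    (1 - delta)*||x||^2 <= ||A x||^2 <= (1 + delta)*||x||^2
    for every k-sparse x. Exponential in k -- toy sizes only."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        s = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, s[0] ** 2 - 1, 1 - s[-1] ** 2)
    return delta

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 12)) / np.sqrt(40)   # columns have unit norm in expectation
d1 = restricted_isometry_constant(A, 1)
d2 = restricted_isometry_constant(A, 2)
print(d1, d2)   # d1 <= d2: the constant grows with the sparsity level k
```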

  3. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
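
    The recurrence-plot construction can be sketched in a few lines. The threshold eps and the recurrence-rate measure below are illustrative choices, not the letter's exact settings; the point is that a periodic signal concentrates recurrences far more than white noise.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 iff ||x_i - x_j|| < eps."""
    x = np.asarray(x, float)
    if x.ndim == 1:
        x = x[:, None]
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points -- the simplest RQA measure."""
    return R.mean()

t = np.linspace(0, 8 * np.pi, 200)
R_periodic = recurrence_plot(np.sin(t), eps=0.1)          # regular diagonal lines
R_noise = recurrence_plot(
    np.random.default_rng(0).normal(size=200), eps=0.1)   # scattered points
print(recurrence_rate(R_periodic), recurrence_rate(R_noise))
```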

  4. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....

  5. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
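
    As an illustration of a sparse regularized estimator in the p >> n setting, here is a minimal LASSO solver via iterative soft-thresholding (ISTA); a sketch with illustrative sizes and penalty, not the lecture's implementation.

```python
import numpy as np

def lasso_ista(X, y, lam, steps=2000):
    """LASSO by iterative soft-thresholding (proximal gradient).

    Minimizes 0.5*||y - X b||^2 + lam*||b||_1, producing a sparse
    coefficient vector b even when p >> n.
    """
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(steps):
        g = X.T @ (X @ b - y)              # gradient of the smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

# p >> n example: only the first 3 of 100 coefficients are nonzero
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))
beta = np.zeros(100); beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.01 * rng.normal(size=40)
b_hat = lasso_ista(X, y, lam=1.0)
print(np.flatnonzero(np.abs(b_hat) > 0.5))  # → [0 1 2], the true support
```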

  6. High-dimensional quantum cloning and applications to quantum hacking.

    Science.gov (United States)

    Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim

    2017-02-01

    Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.

  7. HSM: Heterogeneous Subspace Mining in High Dimensional Data

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Seidl, Thomas

    2009-01-01

    Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...

  8. Analysis of chaos in high-dimensional wind power system.

    Science.gov (United States)

    Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping

    2018-01-01

    A comprehensive analysis of chaos in a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and the parameter ranges that produce chaos are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos occurs in the wind power system when parameter variations or external disturbances reach a certain degree.
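
    The Lyapunov-exponent diagnostic used above can be illustrated on a textbook system. This sketch computes the largest Lyapunov exponent of the one-dimensional logistic map, a deliberately simple stand-in for the 11-dimensional wind power model: a negative exponent indicates periodic dynamics, a positive one chaos.

```python
import numpy as np

def largest_lyapunov_logistic(r, n=100_000, burn=1_000, x0=0.3):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the long-run average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += np.log(abs(r * (1 - 2 * x)))
    return s / n

print(largest_lyapunov_logistic(3.2))  # periodic regime: negative
print(largest_lyapunov_logistic(4.0))  # chaotic regime: positive, near ln 2
```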

  9. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    Science.gov (United States)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure, forming a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality, and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
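
    The projection interpretation can be sketched directly: for unit vectors the similarity is the cosine s, and the dissimilarity is the norm of the orthogonal component, sqrt(1 - s^2). The snippet below also checks the triangle inequality numerically for nonnegative data (where the cosine lies in [0, 1]); this is an illustration of the geometric idea, not the paper's full construction.

```python
import numpy as np

def sim_dissim(x, y):
    """Cosine similarity as a projection, with the orthogonal
    component defining the complementary dissimilarity."""
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    s = float(x @ y)                        # projection (similarity)
    d = float(np.linalg.norm(y - s * x))    # orthogonal part = sqrt(1 - s^2)
    return s, d

# Numerically check d(x, z) <= d(x, y) + d(y, z) on random nonnegative triples
rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y, z = rng.random((3, 20))
    _, dxy = sim_dissim(x, y)
    _, dyz = sim_dissim(y, z)
    _, dxz = sim_dissim(x, z)
    ok &= dxz <= dxy + dyz + 1e-12
print(ok)  # True
```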

  10. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce...... a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval...... performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition....
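
    A hedged sketch of the Jeffreys-Matusita distance between two Gaussian color-region models, using the common JM = sqrt(2*(1 - exp(-B))) form with B the Bhattacharyya distance. The region means and covariances below are hypothetical; the paper estimates such statistics from automatically extracted image regions and learns the combination weights.

```python
import numpy as np

def bhattacharyya_gaussian(m1, C1, m2, C2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    C = 0.5 * (C1 + C2)
    dm = m1 - m2
    term1 = 0.125 * dm @ np.linalg.solve(C, dm)
    term2 = 0.5 * np.log(np.linalg.det(C) /
                         np.sqrt(np.linalg.det(C1) * np.linalg.det(C2)))
    return term1 + term2

def jeffreys_matusita(m1, C1, m2, C2):
    """JM distance, bounded in [0, sqrt(2)): saturates for
    well-separated distributions."""
    B = bhattacharyya_gaussian(m1, C1, m2, C2)
    return np.sqrt(2.0 * (1.0 - np.exp(-B)))

# Hypothetical RGB statistics for three image regions
m_a, m_b = np.array([120., 80., 60.]), np.array([125., 82., 61.])
m_c = np.array([30., 200., 40.])
C = 25.0 * np.eye(3)
print(jeffreys_matusita(m_a, C, m_b, C))  # similar colors: small distance
print(jeffreys_matusita(m_a, C, m_c, C))  # distinct colors: near sqrt(2)
```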

  11. A hybridized K-means clustering approach for high dimensional ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays ... Recently, cluster analysis has become a popular data analysis method in a number of areas.

  12. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies, such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) showed that the Fisher discriminant performs poorly due to diverging spectra and proposed using the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
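
    The feature-screening step of FAIR can be sketched as follows: rank features by the absolute two-sample t-statistic and keep the top m. The classifier itself and the data-driven choice of m via the error bound are omitted; sizes and the signal strength are illustrative.

```python
import numpy as np

def two_sample_t(X1, X2):
    """Componentwise two-sample t-statistics for feature screening."""
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    v1, v2 = X1.var(axis=0, ddof=1), X2.var(axis=0, ddof=1)
    return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

def fair_select(X1, X2, m):
    """Keep the m features with the largest |t|."""
    t = np.abs(two_sample_t(X1, X2))
    return np.argsort(t)[::-1][:m]

# 1000 features, only the first 10 carry signal
rng = np.random.default_rng(0)
X1 = rng.normal(size=(50, 1000)); X1[:, :10] += 1.5
X2 = rng.normal(size=(50, 1000))
selected = fair_select(X1, X2, m=10)
print(sorted(selected))   # (almost always) the informative features 0..9
```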

  13. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  14. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  15. Pricing High-Dimensional American Options Using Local Consistency Conditions

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and

  16. Irregular grid methods for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.

    2004-01-01

    This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of

  17. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    Science.gov (United States)

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. This paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, and normalized sum of squared differences. In an evaluation on clinical data, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
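
    The structural-similarity ingredients named above (luminance, contrast, structure) can be sketched with a global SSIM-style score in the spirit of the standard structural similarity index. This is not the paper's discriminative measure, which modifies the standard formulation, and real SSIM averages the score over local windows rather than whole images.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Global structural similarity: product of luminance, contrast
    and structure terms (whole-image statistics; real SSIM uses
    local windows)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    contrast = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    structure = (sxy + C3) / (sx * sy + C3)
    return luminance * contrast * structure

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 20, size=(64, 64)), 0, 255)
print(ssim_global(img, img))    # identical images: 1.0
print(ssim_global(img, noisy))  # degraded image: below 1.0
```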

  18. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    Science.gov (United States)

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  19. Functional enrichment analyses and construction of functional similarity networks with high confidence function prediction by PFP

    Directory of Open Access Journals (Sweden)

    Kihara Daisuke

    2010-05-01

    Full Text Available Abstract Background A new paradigm of biological investigation takes advantage of technologies that produce large high throughput datasets, including genome sequences, interactions of proteins, and gene expression. The ability of biologists to analyze and interpret such data relies on functional annotation of the included proteins, but even in highly characterized organisms many proteins can lack the functional evidence necessary to infer their biological relevance. Results Here we have applied high confidence function predictions from our automated prediction system, PFP, to three genome sequences, Escherichia coli, Saccharomyces cerevisiae, and Plasmodium falciparum (malaria). The number of annotated genes is increased by PFP to over 90% for all of the genomes. Using this large coverage of function annotation, we introduce functional similarity networks, which represent the functional space of the proteomes. Four functional similarity networks are constructed for each proteome: one each by considering similarity in a single Gene Ontology (GO) category, i.e. Biological Process, Cellular Component, and Molecular Function, and another by considering overall similarity with the funSim score. The functional similarity networks are shown to have higher modularity than the protein-protein interaction network. Moreover, the funSim score network is distinct from the single GO-score networks in showing a higher clustering degree exponent value and thus has a higher tendency to be hierarchical. In addition, examining function assignments to the protein-protein interaction network and local regions of genomes has identified numerous cases where subnetworks or local regions contain functionally coherent proteins. These results will help in interpreting protein interactions and gene order in a genome. Several examples of both analyses are highlighted. Conclusion The analyses demonstrate that applying high confidence predictions from PFP

  20. High-intensity discharge lamp and Duffing oscillator—Similarities and differences

    Science.gov (United States)

    Baumann, Bernd; Schwieger, Joerg; Stein, Ulrich; Hallerberg, Sarah; Wolff, Marcus

    2017-12-01

    The processes inside the arc tube of high-intensity discharge lamps are investigated using finite element simulations. The behavior of the gas mixture inside the arc tube is governed by differential equations describing mass, energy, and charge conservation, as well as the Helmholtz equation for the acoustic pressure and the Reynolds equations for the flow driven by buoyancy and Reynolds stresses. The model is highly nonlinear and requires a recursion procedure to account for the impact of acoustic streaming on the temperature and other fields. The investigations reveal the presence of a hysteresis and the corresponding jump phenomenon, quite similar to a Duffing oscillator. The similarities and, in particular, the differences between the nonlinear behavior of the high-intensity discharge lamp and that of a Duffing oscillator are discussed. For large amplitudes, the high-intensity discharge lamp exhibits a stiffening effect, in contrast to the Duffing oscillator. We speculate on how this stiffening might affect hysteresis suppression.

  1. Deja vu: a database of highly similar citations in the scientific literature.

    Science.gov (United States)

    Errami, Mounir; Sun, Zhaohui; Long, Tara C; George, Angela C; Garner, Harold R

    2009-01-01

    In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine public confidence in scientific integrity. Yet little has been done to help authors and editors identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as 'the' biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org.

  2. Genuinely high-dimensional nonlocality optimized by complementary measurements

    International Nuclear Information System (INIS)

    Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung

    2010-01-01

    Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.

  3. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. 
To train the
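
    The "classic approach" to discovering the active subspace that this abstract contrasts with can be sketched as an eigendecomposition of the average outer product of response gradients; the top eigenvectors span the directions of maximal response variation. This is the gradient-based method, not the paper's gradient-free GP version, and the test function below is an illustrative assumption.

```python
import numpy as np

def active_subspace(grads, k):
    """Classic gradient-based active subspace: eigendecompose the
    average outer product of gradients and keep the top-k eigenvectors."""
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Response depending on 20 inputs only through the 1-D projection w.x
rng = np.random.default_rng(0)
w = rng.normal(size=20); w /= np.linalg.norm(w)
X = rng.normal(size=(500, 20))
# For f(x) = sin(w.x), grad f(x) = cos(w.x) * w
grads = np.cos(X @ w)[:, None] * w[None, :]
eigvals, W = active_subspace(grads, k=1)
print(abs(W[:, 0] @ w))  # ≈ 1: the recovered direction aligns with w
```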

  4. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

  5. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data.
To train the
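
The built-in dimensionality reduction described above amounts to composing a standard RBF kernel with a projection matrix W, so the GP varies only along the active subspace. A minimal numpy sketch of the idea, with W fixed and known for illustration (in the paper W is itself a covariance hyper-parameter estimated from the data, e.g. by optimizing over orthogonal matrices):

```python
import numpy as np

def as_kernel(X1, X2, W, ell=1.0, sf=1.0):
    """RBF kernel evaluated on the projections Z = X W, so the GP only
    varies along the active subspace spanned by the columns of W."""
    Z1, Z2 = X1 @ W, X2 @ W
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
D, n = 10, 40
w = rng.standard_normal((D, 1))
W = w / np.linalg.norm(w)                  # orthonormal projection (1-d active subspace)
X = rng.standard_normal((n, D))
y = np.sin(X @ W).ravel() + 0.01 * rng.standard_normal(n)  # noisy observations

K = as_kernel(X, X, W) + 1e-4 * np.eye(n)  # jitter plays the role of observation noise
alpha = np.linalg.solve(K, y)
Xs = X[:5] + 0.1 * rng.standard_normal((5, D))
mean = as_kernel(Xs, X, W) @ alpha         # GP posterior mean at test inputs
```

Once W is known, the regression effectively reduces to a one-dimensional GP on the projected inputs, which is why a low-dimensional AS makes the link function much easier to learn.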

  6. Investigation on the effect of nonlinear processes on similarity law in high-pressure argon discharges

    Science.gov (United States)

    Fu, Yangyang; Parsey, Guy M.; Verboncoeur, John P.; Christlieb, Andrew J.

    2017-11-01

    In this paper, the effect of nonlinear processes (such as three-body collisions and stepwise ionizations) on the similarity law in high-pressure argon discharges has been studied using the Kinetic Global Model framework. In the discharge model, the ground state argon atoms (Ar), electrons (e), atomic ions (Ar+), molecular ions (Ar2+), and fourteen argon excited levels Ar*(4s and 4p) are considered. The steady-state electron and ion densities are obtained with the nonlinear processes included and excluded in the designed models, respectively. It is found that in similar gas gaps (keeping the product of gas pressure and linear dimension unchanged), with the nonlinear processes included, the normalized density relations deviate gradually from the similarity relations as the scale-up factor decreases. Without the nonlinear processes, the parameter relations are in good agreement with the similarity law predictions. Furthermore, the pressure and the dimension effects are also investigated separately with and without the nonlinear processes. It is shown that the gas pressure effect on the results is less obvious than the dimension effect. Without the nonlinear processes, the pressure and the dimension effects can be estimated from one another based on the similarity relations.

  7. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
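
For an integrand that factorizes into nearest-neighbour couplings, the recursion reduces the d-dimensional sum over quadrature nodes to repeated matrix-vector products, so the cost grows linearly in d rather than exponentially. A small self-contained sketch (the Gaussian coupling and the node count are arbitrary illustrative choices, not the paper's physical models):

```python
import numpy as np

def rni(kernel, d, n_nodes=20):
    """Recursive numerical integration of
        I = ∫ exp(-x_1² - ... - x_d²) · Π_{i<d} kernel(x_i, x_{i+1}) dx
    via Gauss-Hermite quadrature and repeated matrix-vector products."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)  # nodes/weights for ∫ e^{-x²} f(x) dx
    K = kernel(t[:, None], t[None, :])               # coupling evaluated on the node grid
    v = w.copy()
    for _ in range(d - 1):                           # integrate out one variable per step
        v = (v @ K) * w
    return v.sum()

coupling = lambda x, y: np.exp(-0.25 * (x - y) ** 2)  # illustrative nearest-neighbour coupling
val = rni(coupling, d=10)                             # 10-dimensional integral at O(d · n_nodes²) cost
```

Each pass through the loop performs one of the nested one-dimensional quadratures, which is exactly the "apply a low-dimensional rule iteratively" idea in the abstract.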

  8. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...

  9. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  10. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
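
A rough numpy sketch of the estimator family described here: top-k principal-component factors plus a soft-thresholded residual covariance. The threshold below is a fixed constant rather than the adaptive, entry-dependent choice of Cai and Liu, and all sizes are illustrative:

```python
import numpy as np

def poet_cov(X, k, tau=0.1):
    """Low-rank-plus-sparse covariance estimate: PCA factors plus a
    soft-thresholded residual covariance (sketch in the spirit of POET)."""
    n, p = X.shape
    Xc = X - X.mean(0)
    S = Xc.T @ Xc / n                          # sample covariance
    vals, vecs = np.linalg.eigh(S)
    top = np.argsort(vals)[::-1][:k]           # k leading eigenpairs
    L = vecs[:, top] * np.sqrt(vals[top])      # estimated factor loadings
    R = S - L @ L.T                            # idiosyncratic (residual) part
    off = R - np.diag(np.diag(R))
    R_thr = np.diag(np.diag(R)) + np.sign(off) * np.maximum(np.abs(off) - tau, 0.0)
    return L @ L.T + R_thr                     # common part + sparsified residual

rng = np.random.default_rng(1)
p, n_factors, n = 50, 2, 500
B = rng.standard_normal((p, n_factors))        # true loadings
F = rng.standard_normal((n, n_factors))        # factor realizations
X = F @ B.T + rng.standard_normal((n, p))      # observations with unit idiosyncratic noise
Sigma_hat = poet_cov(X, k=2)
```

Thresholding the off-diagonal residual entries is what encodes the "sparse error covariance" assumption while the low-rank term keeps the common-factor correlation.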

  11. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  12. High-dimensional change-point estimation: Combining filtering with convex optimization

    OpenAIRE

    Soh, Yong Sheng; Chandrasekaran, Venkat

    2017-01-01

    We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
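
The filtered derivative baseline mentioned above extends to vector-valued observations by taking the norm of the difference between window means. A toy sketch of that classical statistic only (the paper's actual contribution, exploiting latent low-dimensional structure via convex optimization, is not reproduced here; the signal and sizes are illustrative):

```python
import numpy as np

def filtered_derivative(Y, h):
    """At each time t, the norm of (mean of next h samples) - (mean of previous h)."""
    T = Y.shape[0]
    stats = np.zeros(T)
    for t in range(h, T - h):
        stats[t] = np.linalg.norm(Y[t:t + h].mean(0) - Y[t - h:t].mean(0))
    return stats

rng = np.random.default_rng(2)
T, p, cp = 200, 100, 120
signal = np.zeros((T, p))
signal[cp:] = 3.0 / np.sqrt(p)              # rank-one (low-dimensional) mean shift
Y = signal + 0.5 * rng.standard_normal((T, p))
stats = filtered_derivative(Y, h=20)
t_hat = int(np.argmax(stats))               # estimated change point
```

The undesirable high-dimensional scaling is visible here: the noise floor of the statistic grows like √p, which is what motivates projecting onto the latent low-dimensional structure first.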

  13. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
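
The low-order truncation at the heart of HDMR can be seen in a few lines. A minimal first-order cut-HDMR sketch (a production implementation would tabulate the component functions on per-dimension grids rather than call the model at prediction time; the anchor point and test function are illustrative):

```python
import numpy as np

def hdmr1(f, c):
    """First-order cut-HDMR surrogate anchored at the cut point c:
    f(x) ≈ f(c) + Σ_i [ f(c with component i set to x_i) - f(c) ],
    so building it needs model evaluations that grow linearly, not
    exponentially, in the number of variables."""
    f0 = f(c)
    def surrogate(x):
        total = f0
        for i in range(len(c)):
            xi = c.copy()
            xi[i] = x[i]                   # vary one coordinate, hold the rest at c
            total += f(xi) - f0
        return total
    return surrogate

f = lambda x: np.sum(np.sin(x)) + 1.0      # additive test function (purely first order)
c = np.zeros(5)                            # cut point
s = hdmr1(f, c)
x = np.array([0.1, -0.3, 0.7, 0.2, -0.5])
# For an additive f the first-order expansion is exact, so s(x) matches f(x)
# up to rounding; higher-order terms would be needed for interacting variables.
```

This is precisely why weak higher-order correlations make the fuzzy α-cut analysis tractable: only the one-dimensional component functions need to be resolved.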

  14. Manifold learning to interpret JET high-dimensional operational space

    International Nuclear Information System (INIS)

    Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A

    2013-01-01

    In this paper, the problem of visualization and exploration of JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties owned by the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing maps. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with a high risk of disruption and those with a low risk. (paper)
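
A bare-bones version of the self-organizing map used for such mappings can be written in a few lines of numpy. The grid size, learning schedules, and the synthetic low-dimensional data below are illustrative choices, not the JET setup:

```python
import numpy as np

def train_som(X, grid=(10, 10), iters=4000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: find the best-matching unit for each
    sample and pull its grid neighbourhood toward the sample, with a
    decaying learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    coords = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    W = rng.standard_normal((grid[0] * grid[1], X.shape[1]))
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))       # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                      # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5          # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(1)    # grid distance to the BMU
        h = np.exp(-d2 / (2.0 * sigma ** 2))         # neighbourhood weights
        W += lr * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(3)
Z = rng.standard_normal((500, 2))                    # 2-d latent "operational space"
M = rng.standard_normal((2, 8))
X = Z @ M + 0.1 * rng.standard_normal((500, 8))      # observed in 8 dimensions
W = train_som(X)                                     # 10x10 map of prototype vectors
```

After training, each discharge can be assigned to its best-matching unit, and colouring the grid by a label such as disruptivity gives exactly the kind of low-dimensional map described in the abstract.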

  15. Nonlinear theory for axisymmetric self-similar two-dimensional oscillations of electrons in cold plasma with constant proton background

    Science.gov (United States)

    Osherovich, V. A.; Fainberg, J.

    2018-01-01

    We consider simultaneous oscillations of electrons moving both along the axis of symmetry and also in the direction perpendicular to the axis. We derive a system of three nonlinear ordinary differential equations which describe self-similar oscillations of cold electrons in a constant proton density background (np = n0 = constant). These three equations represent an exact class of solutions. For weak nonlinear conditions, the frequency spectra of electric field oscillations exhibit split frequency behavior at the Langmuir frequency ωp0 and its harmonics, as well as presence of difference frequencies at low spectral values. For strong nonlinear conditions, the spectra contain peaks at frequencies with values ωp0(n + m√2), where n and m are integer numbers (positive and negative). We predict that both spectral types (weak and strong) should be observed in plasmas where axial symmetry may exist. To illustrate possible applications of our theory, we present a spectrum of electric field oscillations observed in situ in the solar wind by the WAVES experiment on the Wind spacecraft during the passage of a type III solar radio burst.
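
The predicted peak set ω = ωp0(n + m√2) is easy to enumerate numerically; a small sketch (the integer ranges and the display cutoff are arbitrary choices for illustration, and ωp0 is normalized to 1):

```python
import numpy as np

wp0 = 1.0  # Langmuir frequency ω_p0, normalized to 1 here
# Peaks predicted for the strongly nonlinear regime: ω = ω_p0 (n + m·√2),
# keeping only positive frequencies below a cutoff of 4 ω_p0.
peaks = sorted({wp0 * (n + m * np.sqrt(2))
                for n in range(-3, 4) for m in range(-3, 4)
                if 0 < n + m * np.sqrt(2) < 4})
```

Note that the set contains the harmonics themselves (m = 0) as well as low-frequency combinations such as 2 - √2 ≈ 0.59 ωp0, consistent with the difference-frequency structure described for the weakly nonlinear regime.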

  16. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
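
The slowness principle underlying the SFA preprocessing stage can be illustrated with linear SFA on a toy mixture: whiten the signals, then keep the whitened directions whose temporal derivative has the least variance. The paper uses a hierarchical nonlinear SFA network on visual input; this linear toy (with illustrative sources and mixing) captures only the core idea:

```python
import numpy as np

def linear_sfa(X, n_out=1):
    """Linear slow feature analysis: whiten the signals, then pick the
    whitened directions whose time derivative has minimal variance."""
    mu = X.mean(0)
    Xc = X - mu
    d, U = np.linalg.eigh(np.cov(Xc.T))
    Wh = U / np.sqrt(d)                          # whitening transform (columns)
    Z = Xc @ Wh
    dd, V = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    W = Wh @ V[:, :n_out]                        # eigh is ascending: slowest first
    return lambda Xnew: (Xnew - mu) @ W

t = np.linspace(0, 4 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(11 * t)           # slowly and quickly varying sources
rng = np.random.default_rng(4)
A = rng.standard_normal((2, 5))                  # mix the sources into 5 channels
X = np.column_stack([slow, fast]) @ A + 0.01 * rng.standard_normal((1000, 5))
s = linear_sfa(X)(X).ravel()                     # recovered slow feature
```

The extracted signal tracks the slow source up to sign and scale, which is the kind of compact state representation the reward-trained network on top can then exploit.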

  17. Method of synthesis of abstract images with high self-similarity

    Science.gov (United States)

    Matveev, Nikolay V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.

    2017-06-01

    Abstract images with high self-similarity could be used for drug-free stress therapy. This is based on the fact that a complex visual environment has a high affective appraisal. To create such an image we can use a setup based on three laser sources of small power and different colors (red, green, blue); the image is the pattern resulting from reflection and refraction by an object of complicated form placed into the laser ray paths. Images obtained experimentally in this way showed a good therapeutic effect. However, finding and choosing an object which gives the needed image structure is very difficult and requires many trials. The goal of this work is to develop a method and procedure for finding the object form which, if placed into the ray paths, can provide the necessary structure of the image. In fact, the task means obtaining the necessary irradiance distribution on a given surface. Traditionally such problems are solved using non-imaging optics methods. In the given case this task is complicated by the intricate structure of the illuminance distribution and its high non-linearity. An alternative way is to use the projected image of a mask with a given structure. We consider both ways and discuss how they can help to speed up the synthesis procedure for a given abstract image of high self-similarity for drug-free therapy setups.

  18. Elucidating high-dimensional cancer hallmark annotation via enriched ontology.

    Science.gov (United States)

    Yan, Shankai; Wong, Ka-Chun

    2017-09-01

    Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Benfotiamine is similar to thiamine in correcting endothelial cell defects induced by high glucose.

    Science.gov (United States)

    Pomero, F; Molinar Min, A; La Selva, M; Allione, A; Molinatti, G M; Porta, M

    2001-01-01

    We investigated the hypothesis that benfotiamine, a lipophilic derivative of thiamine, affects replication delay and generation of advanced glycosylation end-products (AGE) in human umbilical vein endothelial cells cultured in the presence of high glucose. Cells were grown in physiological (5.6 mM) and high (28.0 mM) concentrations of D-glucose, with and without 150 microM thiamine or benfotiamine. Cell proliferation was measured by mitochondrial dehydrogenase activity. AGE generation after 20 days was assessed fluorimetrically. Cell replication was impaired by high glucose (72.3%+/-5.1% of that in physiological glucose, p=0.001). This was corrected by the addition of either thiamine (80.6%+/-2.4%, p=0.005) or benfotiamine (87.5%+/-8.9%, p=0.006), although it was not completely normalized (p=0.001 and p=0.008, respectively) to that in physiological glucose. Increased AGE production in high glucose (159.7%+/-38.9% of fluorescence in physiological glucose, p=0.003) was reduced by thiamine (113.2%+/-16.3%, p=0.008 vs. high glucose alone) or benfotiamine (135.6%+/-49.8%, p=0.03 vs. high glucose alone) to levels similar to those observed in physiological glucose. Benfotiamine, a derivative of thiamine with better bioavailability, corrects defective replication and increased AGE generation in endothelial cells cultured in high glucose, to a similar extent as thiamine. These effects may result from normalization of accelerated glycolysis and the consequent decrease in metabolites that are extremely active in generating nonenzymatic protein glycation. The potential role of thiamine administration in the prevention or treatment of vascular complications of diabetes deserves further investigation.

  20. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
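
The linear variant of ROSM can be sketched as PCA on the high-dimensional output fields followed by an RBF map from the inputs to the PCA scores. All sizes, the synthetic field, and the kernel width below are illustrative, and the thesis's kernel-PCA variant is not shown:

```python
import numpy as np

def rosm_fit(X, Y, n_comp=3, eps=5.0, reg=1e-8):
    """Linear ROSM sketch: compress the high-dimensional outputs Y with PCA,
    then fit a Gaussian RBF interpolant from the inputs X to the PCA scores."""
    Ym = Y.mean(0)
    _, _, Vt = np.linalg.svd(Y - Ym, full_matrices=False)
    basis = Vt[:n_comp]                            # principal output modes
    scores = (Y - Ym) @ basis.T                    # reduced-order coordinates
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(np.exp(-eps * d2) + reg * np.eye(len(X)), scores)
    def predict(Xs):
        d2s = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2s) @ weights @ basis + Ym  # back to full field
    return predict

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (60, 2))                    # 60 design points, 2 inputs
grid = np.linspace(0, 1, 200)
mode1, mode2 = np.sin(2 * np.pi * grid), np.cos(2 * np.pi * grid)
Y = np.outer(np.sin(np.pi * X[:, 0]), mode1) + np.outer(X[:, 1] ** 2, mode2)
predict = rosm_fit(X, Y)                           # surrogate for the 200-dim field
```

Because the RBF regression is done in the handful of PCA coordinates rather than the full output dimension, the surrogate sidesteps the cost that makes classical field surrogates intractable.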

  1. Evidence for Deep Regulatory Similarities in Early Developmental Programs across Highly Diverged Insects

    Science.gov (United States)

    Zhang, Yinan; Samee, Md. Abul Hassan; Halfon, Marc S.; Sinha, Saurabh

    2014-01-01

    Many genes familiar from Drosophila development, such as the so-called gap, pair-rule, and segment polarity genes, play important roles in the development of other insects and in many cases appear to be deployed in a similar fashion, despite the fact that Drosophila-like “long germband” development is highly derived and confined to a subset of insect families. Whether or not these similarities extend to the regulatory level is unknown. Identification of regulatory regions beyond the well-studied Drosophila has been challenging as even within the Diptera (flies, including mosquitoes) regulatory sequences have diverged past the point of recognition by standard alignment methods. Here, we demonstrate that methods we previously developed for computational cis-regulatory module (CRM) discovery in Drosophila can be used effectively in highly diverged (250–350 Myr) insect species including Anopheles gambiae, Tribolium castaneum, Apis mellifera, and Nasonia vitripennis. In Drosophila, we have successfully used small sets of known CRMs as “training data” to guide the search for other CRMs with related function. We show here that although species-specific CRM training data do not exist, training sets from Drosophila can facilitate CRM discovery in diverged insects. We validate in vivo over a dozen new CRMs, roughly doubling the number of known CRMs in the four non-Drosophila species. Given the growing wealth of Drosophila CRM annotation, these results suggest that extensive regulatory sequence annotation will be possible in newly sequenced insects without recourse to costly and labor-intensive genome-scale experiments. We develop a new method, Regulus, which computes a probabilistic score of similarity based on binding site composition (despite the absence of nucleotide-level sequence alignment), and demonstrate similarity between functionally related CRMs from orthologous loci. Our work represents an important step toward being able to trace the evolutionary

  2. Similarity between the superconductivity in the graphene with the spin transport in the two-dimensional antiferromagnet in the honeycomb lattice

    Science.gov (United States)

    Lima, L. S.

    2017-02-01

    We have used Dirac's massless quasi-particles together with the Kubo formula to study spin transport by electrons in the graphene monolayer. We have calculated the electric conductivity and verified the behavior of the AC and DC currents of this system, which is a relativistic electron plasma. Our results show that the AC conductivity tends to infinity in the limit ω → 0, similar to the behavior obtained for spin transport in the two-dimensional frustrated antiferromagnet on the honeycomb lattice. We have made a diagrammatic expansion for the Green's function and have not found any significant change in the results.

  3. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. Three-Dimensional Electromagnetic High Frequency Axisymmetric Cavity Scars.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt

    2014-10-01

    This report examines the localization of high frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. The cases where these orbits lead to unstable localized modes are known as scars. This report treats both the case where the opposing sides, or mirrors, are convex, where there are no interior foci, and the case where they are concave, leading to interior foci. The scalar problem is treated first, but the approximations required to treat the vector field components are also examined. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation. This axisymmetric case forms the opposite extreme (where the two mirror radii at each end of the ray orbit are equal) from the two-dimensional solution examined previously (where one mirror radius is vastly different from the other). The enhancement of the field on the orbit axis can be larger here than in the two-dimensional case.

  5. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  6. Hawking radiation of a high-dimensional rotating black hole

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)

    2010-01-15

    We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitary principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)

  7. The additive hazards model with high-dimensional regressors

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...

  8. High-dimensional quantum channel estimation using classical light

    CSIR Research Space (South Africa)

    Mabena, Chemist M

    2017-11-01

    Full Text Available: PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa, and School of Physics, University of the Witwatersrand, Johannesburg 2000, South...

  9. Data analysis in high-dimensional sparse spaces

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder

    classification techniques for high-dimensional problems are presented: sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduce sparseness to the well-known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...

  10. High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems

    International Nuclear Information System (INIS)

    Wachowiak, M P; Sarlo, B B; Foster, A E Lambe

    2014-01-01

    Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that ''supercomputing on a budget'' is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task
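
The serial skeleton that such parallel decompositions start from is compact. Below is a minimal global-best PSO sketch in pure Python (illustrative parameter values; the paper's heterogeneous scheduling and cost-function analysis are not shown):

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimiser (serial, illustrative)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (typical values)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:  # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a heterogeneous setting, the inner loop over particles (and, for data-intensive cost functions, the evaluation of `f` itself) is what gets farmed out to GPU or multi-core hardware.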

  11. Evidence for deep regulatory similarities in early developmental programs across highly diverged insects.

    Science.gov (United States)

    Kazemian, Majid; Suryamohan, Kushal; Chen, Jia-Yu; Zhang, Yinan; Samee, Md Abul Hassan; Halfon, Marc S; Sinha, Saurabh

    2014-09-01

    Many genes familiar from Drosophila development, such as the so-called gap, pair-rule, and segment polarity genes, play important roles in the development of other insects and in many cases appear to be deployed in a similar fashion, despite the fact that Drosophila-like "long germband" development is highly derived and confined to a subset of insect families. Whether or not these similarities extend to the regulatory level is unknown. Identification of regulatory regions beyond the well-studied Drosophila has been challenging as even within the Diptera (flies, including mosquitoes) regulatory sequences have diverged past the point of recognition by standard alignment methods. Here, we demonstrate that methods we previously developed for computational cis-regulatory module (CRM) discovery in Drosophila can be used effectively in highly diverged (250-350 Myr) insect species including Anopheles gambiae, Tribolium castaneum, Apis mellifera, and Nasonia vitripennis. In Drosophila, we have successfully used small sets of known CRMs as "training data" to guide the search for other CRMs with related function. We show here that although species-specific CRM training data do not exist, training sets from Drosophila can facilitate CRM discovery in diverged insects. We validate in vivo over a dozen new CRMs, roughly doubling the number of known CRMs in the four non-Drosophila species. Given the growing wealth of Drosophila CRM annotation, these results suggest that extensive regulatory sequence annotation will be possible in newly sequenced insects without recourse to costly and labor-intensive genome-scale experiments. We develop a new method, Regulus, which computes a probabilistic score of similarity based on binding site composition (despite the absence of nucleotide-level sequence alignment), and demonstrate similarity between functionally related CRMs from orthologous loci. Our work represents an important step toward being able to trace the evolutionary history of gene

  12. Simulations of dimensionally reduced effective theories of high temperature QCD

    CERN Document Server

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by perf...

  13. High-Dimensional Quantum Information Processing with Linear Optics

    Science.gov (United States)

    Fitzpatrick, Casey A.

    Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for

  14. Management of high blood pressure in children: similarities and differences between US and European guidelines.

    Science.gov (United States)

    Brady, Tammy M; Stefani-Glücksberg, Amalia; Simonetti, Giacomo D

    2018-03-28

    Over the last several decades, many seminal longitudinal cohort studies have clearly shown that the antecedents to adult disease have their origins in childhood. Hypertension (HTN), which has become increasingly prevalent in childhood, represents one of the most important risk factors for cardiovascular diseases (CVD) such as heart disease and stroke. With the risk of adult HTN much greater when HTN is manifest in childhood, the future burden of CVD worldwide is therefore concerning. In an effort to slow the current trajectory, professional societies have called for more rigorous, evidence-based guideline development to aid primary care providers and subspecialists in improving recognition, diagnosis, evaluation, and treatment of pediatric HTN. In 2016 the European Society of Hypertension and in 2017 the American Academy of Pediatrics published updated guidelines for prevention and management of high blood pressure (BP) in children. While there are many similarities between the two guidelines, important differences exist. These differences, along with the identified knowledge gaps in each, will hopefully spur clinical researchers to action. This review highlights some of these similarities and differences, focusing on several of the more important facets regarding prevalence, prevention, diagnosis, management, and treatment of childhood HTN.

  15. Masturbation Experiences of Swedish Senior High School Students: Gender Differences and Similarities.

    Science.gov (United States)

    Driemeyer, Wiebke; Janssen, Erick; Wiltfang, Jens; Elmerstig, Eva

    Research about masturbation tends to be limited to the assessment of masturbation incidence and frequency. Consequently, little is known about what people experience connected to masturbation. This might be one reason why theoretical approaches that specifically address the persistent gender gap in masturbation frequency are lacking. The aim of the current study was to explore several aspects of masturbation in young men and women, and to examine possible associations with their social backgrounds and sexual histories. Data from 1,566 women and 1,452 men (ages 18 to 22) from 52 Swedish senior high schools were analyzed. Comparisons between men and women were made regarding incidence of and age at first masturbation, the use of objects (e.g., sex toys), fantasies, and sexual functioning during masturbation, as well as about their attitudes toward masturbation and sexual fantasies. Cluster analysis was carried out to identify similarities between and differences within the gender groups. While overall more men than women reported experience with several of the investigated aspects, cluster analyses revealed that a large proportion of men and women reported similar experiences and that fewer experiences are not necessarily associated with negative attitudes toward masturbation. Implications of these findings are discussed in consideration of particular social backgrounds.

  16. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
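
A plain (exact) k-d tree conveys the structure that FLANN's randomized k-d forests build many times over with randomized split axes. The pure-Python sketch below is illustrative only; it omits the approximation, priority queues, and automatic configuration described in the abstract:

```python
def build_kdtree(points, depth=0):
    """Recursively build a k-d tree, splitting on axes in round-robin order."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Exact nearest-neighbor search with branch-and-bound pruning."""
    if node is None:
        return best
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < dist2(best):  # descend the far branch only if it can hold a closer point
        best = nearest(far, target, best)
    return best
```

In high dimensions the pruning test almost never fires, which is exactly why FLANN resorts to approximate search over multiple randomized trees.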

  17. High-dimensional single-cell cancer biology.

    Science.gov (United States)

    Irish, Jonathan M; Doxie, Deon B

    2014-01-01

    Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.

  18. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  19. Class prediction for high-dimensional class-imbalanced data

    Directory of Open Access Journals (Sweden)

    Lusa Lara

    2010-10-01

    Full Text Available. Abstract. Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
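
Of the strategies evaluated, down-sizing is the simplest to sketch: the majority class is randomly undersampled to the minority-class count before training. The helper below is a generic illustration, not the paper's exact protocol:

```python
import random

def downsize(X, y, seed=0):
    """Randomly undersample every class to the minority-class count
    (sampling without replacement)."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n_min):  # keep n_min samples per class
            Xb.append(xi)
            yb.append(label)
    return Xb, yb
```

As the abstract notes, in high dimensions this must be combined carefully with variable selection (e.g. inside asymmetric bagging) or the selection step reintroduces the majority-class bias.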

  20. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Full Text Available. Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7% handmade paraphrases, 54,815 (55.4% artificial plagiarism cases, and 32,578 (32.9% plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentations schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
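
A toy version of the fuzzy-membership idea can be sketched in a few lines: each word of the suspicious text receives a membership degree in the source segment equal to its best relatedness to any source word, and the segment score averages these degrees. The relatedness values below are placeholders for the paper's WordNet-based measures, and the synonym table is a hypothetical stand-in:

```python
def word_similarity(w1, w2, synonyms):
    """Toy relatedness: 1.0 for identical words, 0.5 for listed synonyms, else 0.0
    (a stand-in for WordNet-based measures)."""
    if w1 == w2:
        return 1.0
    if w2 in synonyms.get(w1, ()) or w1 in synonyms.get(w2, ()):
        return 0.5
    return 0.0

def fuzzy_similarity(source, suspicious, synonyms):
    """Average membership of suspicious words in the source segment: each word's
    membership is its best relatedness to any source word."""
    src_words = source.lower().split()
    memberships = [max(word_similarity(w, v, synonyms) for v in src_words)
                   for w in suspicious.lower().split()]
    return sum(memberships) / len(memberships)
```

A real system would add the POS filtering, fuzzy rules, and the permission/variation thresholds described in the abstract on top of such a word-level membership.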

  1. High-dimensional quantum cryptography with twisted light

    International Nuclear Information System (INIS)

    Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J

    2015-01-01

    Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
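
For context, the per-photon information of a d-dimensional channel with symmetric symbol errors follows a standard textbook expression (this is background, not the paper's full security analysis); at zero error a d = 7 alphabet carries log2(7) ≈ 2.81 bits, which the reported 2.05 bits per sifted photon approaches. The function name is ours:

```python
import math

def sifted_capacity(d, e):
    """Mutual information (bits per sifted symbol) of a d-dimensional channel
    with symbol error rate e, errors spread uniformly over the d-1 wrong symbols:
    I = log2(d) - H(e) - e*log2(d-1)."""
    if e == 0:
        return math.log2(d)
    return (math.log2(d)
            + (1 - e) * math.log2(1 - e)
            + e * math.log2(e / (d - 1)))
```

The gap between log2(7) and 2.05 bits reflects the residual error rate of the OAM/ANG mode generation and sorting described above.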

  2. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    Science.gov (United States)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
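
The variogram intuition behind VARS can be sketched by sweeping one input across its range and computing the variogram of the response at a few lags; an influential factor shows large variogram values already at short lags. This is only the one-dimensional intuition, with hypothetical parameter names, not the full VARS algorithm:

```python
def directional_variogram(f, base, index, lo, hi, n=200, lags=(1, 2, 4)):
    """Sample variogram of f along input dimension `index`:
    gamma(h) = 0.5 * mean[(f(x + h*e_i) - f(x))^2] over a regular grid."""
    step = (hi - lo) / n
    xs = [lo + i * step for i in range(n + 1)]
    def eval_at(v):
        x = base[:]          # hold all other inputs fixed at the base point
        x[index] = v
        return f(x)
    ys = [eval_at(v) for v in xs]
    gamma = {}
    for lag in lags:
        diffs = [(ys[i + lag] - ys[i]) ** 2 for i in range(len(ys) - lag)]
        gamma[lag * step] = 0.5 * sum(diffs) / len(diffs)
    return gamma
```

VARS generalizes this to directional variograms over the whole input space and integrates them into sensitivity metrics, which is where its efficiency gains come from.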

  3. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has consid...... is minimized. Next, the method is applied on different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high...... dimensional reliability problems in structural dynamics.
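
Bucher's asymptotic sampling idea can be sketched for a scalar limit state: inflate the standard deviation by 1/f (with f < 1) so that failures become observable, estimate the reliability index beta(f) by Monte Carlo at several support points f, fit beta(f) = A·f + B/f, and extrapolate to f = 1. This is a one-dimensional sketch under assumed sample sizes; the paper applies the method with many random variables and studies the support-point distribution:

```python
import random
from statistics import NormalDist

def beta_estimate(g, f, n=100_000, seed=0):
    """Monte Carlo reliability index with the standard deviation inflated by 1/f
    (f < 1), which makes the rare failure event g(x) < 0 observable."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(rng.gauss(0.0, 1.0 / f)) < 0)
    return -NormalDist().inv_cdf(failures / n)

def asymptotic_sampling(g, fs=(0.4, 0.5, 0.6, 0.7)):
    """Fit beta(f) = A*f + B/f through inflated-variance estimates at the
    support points fs, then extrapolate to the target scale f = 1."""
    pts = [(f, beta_estimate(g, f, seed=i)) for i, f in enumerate(fs)]
    # least squares for the two-term model: solve the 2x2 normal equations
    s11 = sum(f * f for f, _ in pts)
    s12 = float(len(pts))               # sum over f * (1/f)
    s22 = sum(1.0 / (f * f) for f, _ in pts)
    t1 = sum(f * b for f, b in pts)
    t2 = sum(b / f for f, b in pts)
    det = s11 * s22 - s12 * s12
    A = (t1 * s22 - t2 * s12) / det
    B = (s11 * t2 - s12 * t1) / det
    return A + B                        # extrapolated beta at f = 1
```

For a linear limit state g(x) = beta0 - x the model is exact (beta(f) = beta0·f), so the extrapolation recovers beta0 up to Monte Carlo noise.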

  4. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...... the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  5. Quantum correlation of high dimensional system in a dephasing environment

    Science.gov (United States)

    Ji, Yinghua; Ke, Qiang; Hu, Juju

    2018-05-01

    For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics is studied as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence exhibits non-Markovian features and a quantum-correlation freezing phenomenon. The former is much weaker than that in the sub-Ohmic or Ohmic thermal reservoir environment.

  6. Similarity analysis for the high-pressure inductively coupled plasma source

    International Nuclear Information System (INIS)

    Vanden-Abeele, D; Degrez, G

    2004-01-01

    It is well known that the optimal operating parameters of an inductively coupled plasma (ICP) torch strongly depend upon its dimensions. To understand this relationship better, we derive a dimensionless form of the equations governing the behaviour of high-pressure ICPs. The requirement of similarity then naturally leads to expressions for the operating parameters as a function of the plasma radius. In addition to the well-known scaling law for frequency, surprising results appear for the dependence of the mass flow rate, dissipated power and operating pressure upon the plasma radius. While the obtained laws do not appear to be in good agreement with empirical results in the literature, their correctness is supported by detailed numerical calculations of ICP sources of varying diameters. The approximations of local thermodynamic equilibrium and negligible radiative losses restrict the validity of our results and can be responsible for the disagreement with empirical data. The derived scaling laws are useful for the design of new plasma torches and may provide explanations for the unsteadiness observed in certain existing ICP sources

  7. Characterization of CG6178 gene product with high sequence similarity to firefly luciferase in Drosophila melanogaster.

    Science.gov (United States)

    Oba, Yuichi; Ojika, Makoto; Inouye, Satoshi

    2004-03-31

    This is the first identification of a long-chain fatty acyl-CoA synthetase in Drosophila by enzymatic characterization. The gene product of CG6178 (CG6178) in the Drosophila melanogaster genome, which has a high sequence similarity to firefly luciferase, has been expressed and characterized. CG6178 showed long-chain fatty acyl-CoA synthetic activity in the presence of ATP, CoA and Mg(2+), suggesting a fatty acyl adenylate is an intermediate. Recently, it was revealed that firefly luciferase has two catalytic functions, monooxygenase (luciferase) and AMP-mediated CoA ligase (fatty acyl-CoA synthetase). However, unlike firefly luciferase, CG6178 did not show luminescence activity in the presence of firefly luciferin, ATP, CoA and Mg(2+). The enzymatic properties of CG6178, including substrate specificity, pH dependency and optimal temperature, were close to those of firefly luciferase and rat fatty acyl-CoA synthetase. Further, phylogenetic analyses strongly suggest that the firefly luciferase gene may have evolved from a fatty acyl-CoA synthetase gene as a common ancestral gene.

  8. High doses of dextromethorphan, an NMDA antagonist, produce effects similar to classic hallucinogens

    Science.gov (United States)

    Carter, Lawrence P.; Johnson, Matthew W.; Mintzer, Miriam Z.; Klinedinst, Margaret A.; Griffiths, Roland R.

    2013-01-01

    Rationale Although reports of dextromethorphan (DXM) abuse have increased recently, few studies have examined the effects of high doses of DXM. Objective This study in humans evaluated the effects of supratherapeutic doses of DXM and triazolam. Methods Single, acute, oral doses of DXM (100, 200, 300, 400, 500, 600, 700, 800 mg/70 kg), triazolam (0.25, 0.5 mg/70kg), and placebo were administered to twelve healthy volunteers with histories of hallucinogen use, under double-blind conditions, using an ascending dose run-up design. Subjective, behavioral, and physiological effects were assessed repeatedly after drug administration for 6 hours. Results Triazolam produced dose-related increases in subject-rated sedation, observer-rated sedation, and behavioral impairment. DXM produced a profile of dose-related physiological and subjective effects differing from triazolam. DXM effects included increases in blood pressure, heart rate, and emesis, increases in observer-rated effects typical of classic hallucinogens (e.g. distance from reality, visual effects with eyes open and closed, joy, anxiety), and participant ratings of stimulation (e.g. jittery, nervous), somatic effects (e.g. tingling, headache), perceptual changes, end-of-session drug liking, and mystical-type experience. After 400 mg/70kg DXM, 11 of 12 participants indicated on a pharmacological class questionnaire that they thought they had received a classic hallucinogen (e.g. psilocybin). Drug effects resolved without significant adverse effects by the end of the session. In a 1-month follow up volunteers attributed increased spirituality and positive changes in attitudes, moods, and behavior to the session experiences. Conclusions High doses of DXM produced effects distinct from triazolam and had characteristics that were similar to the classic hallucinogen psilocybin. PMID:22526529

  9. Statistical mechanics of complex neural systems and high dimensional data

    International Nuclear Information System (INIS)

    Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya

    2013-01-01

    Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)

  10. Total and high molecular weight adiponectin have similar utility for the identification of insulin resistance

    Directory of Open Access Journals (Sweden)

    Aguilar-Salinas Carlos A

    2010-06-01

    Full Text Available Abstract Background Insulin resistance (IR) and related metabolic disturbances are characterized by low levels of adiponectin. High molecular weight adiponectin (HMWA) is considered the active form of adiponectin and a better marker of IR than total adiponectin. The objective of this study is to compare the utility of total adiponectin, HMWA and the HMWA/total adiponectin index (SA index) for the identification of IR and related metabolic conditions. Methods A cross-sectional analysis was performed in a group of ambulatory subjects, aged 20 to 70 years, in Mexico City. Areas under the receiver operating characteristic (ROC) curve for total adiponectin, HMWA and the SA index were plotted for the identification of metabolic disturbances. Sensitivity and specificity, positive and negative predictive values, and accuracy for the identification of IR were calculated. Results The study included 101 men and 168 women. The areas under the ROC curve for total adiponectin and HMWA for the identification of IR (0.664 vs. 0.669, P = 0.74), obesity (0.592 vs. 0.610, P = 0.32), hypertriglyceridemia (0.661 vs. 0.671, P = 0.50) and hypoalphalipoproteinemia (0.624 vs. 0.633, P = 0.58) were similar. A total adiponectin level of 8.03 μg/ml was associated with a sensitivity of 57.6%, a specificity of 65.9%, a positive predictive value of 50.0%, a negative predictive value of 72.4%, and an accuracy of 62.7% for the diagnosis of IR. The corresponding figures for a HMWA value of 4.25 μg/ml were 59.6%, 67.1%, 51.8%, 73.7% and 64.2%. The area under the ROC curve of the SA index for the identification of IR was 0.622 [95% CI 0.554-0.691], obesity 0.613 [95% CI 0.536-0.689], hypertriglyceridemia 0.616 [95% CI 0.549-0.683], and hypoalphalipoproteinemia 0.606 [95% CI 0.535-0.677]. Conclusions Total adiponectin, HMWA and the SA index had similar utility for the identification of IR and metabolic disturbances.
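
    The sensitivity, specificity, predictive values and accuracy reported above all derive from a single 2×2 confusion matrix at a given cutoff. A minimal sketch of the standard definitions (the counts below are illustrative, not the study's data):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2 confusion-matrix metrics for a diagnostic cutoff."""
        sensitivity = tp / (tp + fn)            # among truly IR subjects, fraction flagged
        specificity = tn / (tn + fp)            # among non-IR subjects, fraction cleared
        ppv = tp / (tp + fp)                    # positive predictive value
        npv = tn / (tn + fn)                    # negative predictive value
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, ppv, npv, accuracy

    # Illustrative counts only (not the study's data):
    sens, spec, ppv, npv, acc = diagnostic_metrics(tp=57, fp=57, fn=42, tn=113)
    ```

    Each reported percentage is one of these ratios evaluated at the chosen adiponectin cutoff.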

  11. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we provide general formulae from which similar expressions can be obtained for any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum number up to n = 100.

  12. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N. N.; Mazur, E. A.

    2016-01-01

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PH_k are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH_3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  13. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we provide general formulae from which similar expressions can be obtained for any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum number up to n = 100.

  14. Quasi-two-dimensional metallic hydrogen in diphosphide at a high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Degtyarenko, N. N.; Mazur, E. A., E-mail: eugen-mazur@mail.ru [National Research Nuclear University MEPhI (Russian Federation)

    2016-08-15

    The structural, electronic, phonon, and other characteristics of the normal phases of phosphorus hydrides with stoichiometry PH_k are analyzed. The properties of the initial substance, namely diphosphine, are calculated. In contrast to phosphorus hydrides with stoichiometry PH_3, a quasi-two-dimensional phosphorus-stabilized lattice of metallic hydrogen can be formed in this substance during hydrostatic compression at a high pressure. The formed structure with H–P–H elements is shown to be locally stable in the phonon spectrum, i.e., to be metastable. The properties of diphosphine are compared with the properties of similar structures of sulfur hydrides.

  15. Gender Similarities in Math Performance from Middle School through High School

    Science.gov (United States)

    Scafidi, Tony; Bui, Khanh

    2010-01-01

    Using data from 10 states, Hyde, Lindberg, Linn, Ellis, and Williams (2008) found gender similarities in performance on standardized math tests. The present study attempted to replicate this finding with national data and to extend it by examining whether gender similarities in math performance are moderated by race, socioeconomic status, or math…

  16. Similar health benefits of endurance and high-intensity interval training in obese children.

    Directory of Open Access Journals (Sweden)

    Ana Carolina Corte de Araujo

    Full Text Available PURPOSE: To compare two modalities of exercise training (i.e., Endurance Training [ET] and High-Intensity Interval Training [HIT]) on health-related parameters in obese children aged between 8 and 12 years. METHODS: Thirty obese children were randomly allocated into either the ET or HIT group. The ET group performed a 30 to 60-minute continuous exercise at 80% of the peak heart rate (HR). The HIT group performed 3 to 6 sets of 60-s sprints at 100% of the peak velocity, interspersed by 3-min active recovery periods at 50% of the exercise velocity. HIT sessions lasted ~70% less than ET sessions. At baseline and after 12 weeks of intervention, aerobic fitness, body composition and metabolic parameters were assessed. RESULTS: Both the absolute (ET: 26.0%; HIT: 19.0%) and the relative VO2 peak (ET: 13.1%; HIT: 14.6%) were significantly increased in both groups after the intervention. Additionally, the total time of exercise (ET: 19.5%; HIT: 16.4%) and the peak velocity during the maximal graded cardiorespiratory test (ET: 16.9%; HIT: 13.4%) were significantly improved across interventions. Insulinemia (ET: 29.4%; HIT: 30.5%) and the HOMA index (ET: 42.8%; HIT: 37.0%) were significantly lower for both groups at POST when compared to PRE. Body mass was significantly reduced in the HIT (2.6%) but not in the ET group (1.2%). A significant reduction in BMI was observed for both groups after the intervention (ET: 3.0%; HIT: 5.0%). The responsiveness analysis revealed a very similar pattern of the most responsive variables among groups. CONCLUSION: HIT and ET were equally effective in improving important health-related parameters in obese youth.
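
    The HOMA index reported above is conventionally computed from fasting glucose and insulin as HOMA-IR = glucose [mmol/L] × insulin [µU/mL] / 22.5. This is the standard formula, not stated in the abstract, and the values below are hypothetical:

    ```python
    def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
        """HOMA insulin-resistance index (standard formula)."""
        return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

    def percent_change(pre, post):
        return 100.0 * (post - pre) / pre

    # Hypothetical pre/post values for one child:
    pre = homa_ir(5.0, 20.0)
    post = homa_ir(4.8, 12.0)
    drop = percent_change(pre, post)   # a negative value indicates improvement
    ```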

  17. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus

    2013-11-12

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.
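
    The rank-one structure is what makes recovery from axis-aligned point queries possible: once a point z with f(z) ≠ 0 is known, f is determined by its values along the d lines through z parallel to the coordinate axes, via f(x) = ∏_j f(z_1,...,x_j,...,z_d) / f(z)^(d-1). A small numerical check of that identity (the factor functions are made up for illustration):

    ```python
    import math
    import random

    d = 4
    # Arbitrary smooth one-dimensional factors f_j(t) = 1 + a_j * t:
    factors = [lambda t, a=a: 1.0 + a * t for a in (0.5, 1.0, 1.5, 2.0)]

    def f(x):
        """Rank-one function: product of one-dimensional factors."""
        out = 1.0
        for fj, xj in zip(factors, x):
            out *= fj(xj)
        return out

    z = [0.3] * d                       # base point with f(z) != 0
    rng = random.Random(0)
    x = [rng.random() for _ in range(d)]

    # f evaluated along the d axis-parallel lines through z, divided by f(z)^(d-1):
    fz = f(z)
    approx = math.prod(f(z[:j] + [x[j]] + z[j + 1:]) for j in range(d)) / fz ** (d - 1)
    assert abs(approx - f(x)) < 1e-12   # exact (up to rounding) for rank-one f
    ```

    The paper's contribution is doing this stably from noisy smoothness classes with well-chosen query points; the identity above is the exact-arithmetic core of why O(dN^{-r}) is achievable once z is known.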

  18. Design guidelines for high dimensional stability of CFRP optical bench

    Science.gov (United States)

    Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe

    2013-09-01

    In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly when embodying reflective optics, angular stability is critical. Angular stability or warping stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve the perfect laminate and there will always be manufacturing errors in trying to reach a quasi-iso laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench) can be easily monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) obtained from typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The selected composite material for the study is the unidirectional prepreg from Tencate M55J/TC410. M55J is a high modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be: the ply angular errors, laminate in-plane parallelism (between 0° ply direction of both facesheets), fiber volume fraction tolerance and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize the warping stability by taking into account the main dimensional stability contributors, the bench geometry and the optical mount interface are then proposed.

  19. Approximation of High-Dimensional Rank One Tensors

    KAUST Repository

    Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars

    2013-01-01

    Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.

  20. A qualitative numerical study of high dimensional dynamical systems

    Science.gov (United States)

    Albers, David James

    Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety-percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. 
Moreover, results regarding the high-dimensional
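
    The Kaplan-Yorke dimension mentioned in the abstract is computed from the ordered Lyapunov spectrum by the standard Kaplan-Yorke formula: D = k + (λ_1 + ... + λ_k) / |λ_{k+1}|, where k is the largest index for which the partial sum is still non-negative. A sketch (the spectrum below is hypothetical):

    ```python
    def kaplan_yorke_dimension(lyapunov_exponents):
        """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum.

        D = k + (sum of the k largest exponents) / |lambda_{k+1}|, where k is
        the largest index with a non-negative partial sum.
        """
        lam = sorted(lyapunov_exponents, reverse=True)
        partial = 0.0
        for k, l in enumerate(lam):
            if partial + l < 0:
                return k + partial / abs(l)
            partial += l
        return float(len(lam))  # all partial sums non-negative: full dimension

    # Hypothetical spectrum of a chaotic system (one positive exponent):
    assert abs(kaplan_yorke_dimension([0.5, 0.0, -1.0]) - 2.5) < 1e-12
    ```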

  1. Progress in high-dimensional percolation and random graphs

    CERN Document Server

    Heydenreich, Markus

    2017-01-01

    This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader's understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...

  2. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter trading data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, eliminating irrelevant features early to achieve speed.
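
    The coordinate descent ingredient mentioned above can be sketched for the plain Lasso, minimizing (1/2n)‖y − Xβ‖² + λ‖β‖₁ by cycling soft-thresholding updates over the coordinates. This is only the baseline solver, without the safe screening rules or the noise-level smoothing that are the paper's contributions:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t * |.|."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def lasso_cd(X, y, lam, n_iter=200):
        """Plain Lasso via cyclic coordinate descent (sketch, no screening rules).

        Minimizes (1/(2n)) * ||y - X beta||^2 + lam * ||beta||_1.
        """
        n, p = X.shape
        beta = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                # Partial residual excluding coordinate j:
                r_j = y - X @ beta + X[:, j] * beta[j]
                beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
        return beta
    ```

    With an orthogonal design the update reduces to componentwise soft-thresholding of y, which makes the sketch easy to sanity-check.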

  3. On the Zeeman Effect in highly excited atoms: 2. Three-dimensional case

    International Nuclear Information System (INIS)

    Baseia, B.; Medeiros e Silva Filho, J.

    1984-01-01

    A previous result, found in two-dimensional hydrogen atoms, is extended to the three-dimensional case. A mapping of a four-dimensional space R^4 onto R^3, which establishes an equivalence between Coulomb and harmonic potentials, is used to show that the exact solution of the Zeeman effect in highly excited atoms cannot be reached. (Author) [pt

  4. Characterization of highly anisotropic three-dimensionally nanostructured surfaces

    International Nuclear Information System (INIS)

    Schmidt, Daniel

    2014-01-01

    Generalized ellipsometry, a non-destructive optical characterization technique, is employed to determine geometrical structure parameters and anisotropic dielectric properties of highly spatially coherent three-dimensionally nanostructured thin films grown by glancing angle deposition. The (piecewise) homogeneous biaxial layer model approach is discussed, which can be universally applied to model the optical response of sculptured thin films with different geometries and from diverse materials, and structural parameters as well as effective optical properties of the nanostructured thin films are obtained. Alternative model approaches for slanted columnar thin films, anisotropic effective medium approximations based on the Bruggeman formalism, are presented, which deliver results comparable to the homogeneous biaxial layer approach and in addition provide film constituent volume fraction parameters as well as depolarization or shape factors. Advantages of these ellipsometry models are discussed on the example of metal slanted columnar thin films, which have been conformally coated with a thin passivating oxide layer by atomic layer deposition. Furthermore, the application of an effective medium approximation approach to in-situ growth monitoring of this anisotropic thin film functionalization process is presented. It was found that structural parameters determined with the presented optical model equivalents for slanted columnar thin films agree very well with scanning electron microscope image estimates. - Highlights: • Summary of optical model strategies for sculptured thin films with arbitrary geometries • Application of the rigorous anisotropic Bruggeman effective medium applications • In-situ growth monitoring of atomic layer deposition on biaxial metal slanted columnar thin film

  5. Effects of dependence in high-dimensional multiple testing problems

    Directory of Open Access Journals (Sweden)

    van de Wiel Mark A

    2008-02-01

    Full Text Available Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which are hardly representative of real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as the Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method on π0 or FDR estimation in a dependency context.
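
    The Benjamini-Hochberg procedure evaluated above can be sketched in its standard step-up form: reject the hypotheses with the k smallest p-values, where k is the largest rank with p_(k) ≤ k·α/m. This is the textbook version, not the adaptive variant the study finds most robust:

    ```python
    def benjamini_hochberg(pvals, alpha=0.05):
        """Benjamini-Hochberg step-up FDR procedure.

        Returns a boolean list, True where the hypothesis is rejected.
        """
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        # Largest rank k with p_(k) <= k * alpha / m:
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= rank * alpha / m:
                k_max = rank
        reject = [False] * m
        for rank, i in enumerate(order, start=1):
            if rank <= k_max:
                reject[i] = True
        return reject
    ```

    Note the step-up character: every p-value ranked at or below k_max is rejected, even if it individually exceeded its own threshold.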

  6. Microfluidic engineered high cell density three-dimensional neural cultures

    Science.gov (United States)

    Cullen, D. Kacy; Vukasinovic, Jelena; Glezer, Ari; La Placa, Michelle C.

    2007-06-01

    Three-dimensional (3D) neural cultures with cells distributed throughout a thick, bioactive protein scaffold may better represent neurobiological phenomena than planar correlates lacking matrix support. Neural cells in vivo interact within a complex, multicellular environment with tightly coupled 3D cell-cell/cell-matrix interactions; however, thick 3D neural cultures at cell densities approaching that of brain rapidly decay, presumably due to diffusion-limited interstitial mass transport. To address this issue, we have developed a novel perfusion platform that utilizes forced intercellular convection to enhance mass transport. First, we demonstrated that in thick (>500 µm) 3D neural cultures supported by passive diffusion alone, viability degraded at high cell densities. At brain-like densities (≥10⁴ cells mm⁻³), continuous medium perfusion at 2.0-11.0 µL min⁻¹ improved viability compared to non-perfused cultures (p < 0.05), which exhibited cell death and matrix degradation. In perfused cultures, survival depended on proximity to the perfusion source at 2.00-6.25 µL min⁻¹ (p < 0.05), while higher flow rates yielded >90% viability in both neuronal cultures and neuronal-astrocytic co-cultures. This work demonstrates the utility of forced interstitial convection in improving the survival of high cell density 3D engineered neural constructs and may aid in the development of novel tissue-engineered systems reconstituting 3D cell-cell/cell-matrix interactions.

  7. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
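
    A crude version of differential correlation estimation can be sketched with a fixed universal threshold on the entrywise difference of sample correlation matrices. The paper's procedure chooses the threshold adaptively per entry; the sketch below is only illustrative of the thresholding idea:

    ```python
    import numpy as np

    def thresholded_differential_correlation(X1, X2, tau):
        """Entrywise-thresholded estimate of D = corr(X2) - corr(X1).

        X1, X2: (samples x variables) data matrices from the two conditions.
        tau: fixed universal threshold (the paper's method adapts it per entry).
        """
        D = np.corrcoef(X2, rowvar=False) - np.corrcoef(X1, rowvar=False)
        D_hat = np.where(np.abs(D) > tau, D, 0.0)
        np.fill_diagonal(D_hat, 0.0)  # both correlation matrices have unit diagonal
        return D_hat
    ```

    Entries whose estimated change is small (likely noise) are zeroed out, producing a sparse estimate of the differential co-expression structure.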

  8. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  9. The literary uses of high-dimensional space

    Directory of Open Access Journals (Sweden)

    Ted Underwood

    2015-12-01

    Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.

  10. Methodology to unmix spectrally similar minerals using high order derivative spectra

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available [Slide presentation: Debba (CSIR), "Unmixing spectrally similar minerals", Rhodes University, 2009. Only introductory slides survive in the extracted text; they motivate spectral unmixing with a chocolate-cake ingredient table as an analogy for a mixed spectrum, before the high order derivative method is presented.]

  11. Self-similarity in high Atwood number Rayleigh-Taylor experiments

    Science.gov (United States)

    Mikhaeil, Mark; Suchandra, Prasoon; Pathikonda, Gokul; Ranjan, Devesh

    2017-11-01

    Self-similarity is a critical concept in turbulent and mixing flows. In the Rayleigh-Taylor instability, theory and simulations have shown that the flow exhibits properties of self-similarity as the mixing Reynolds number exceeds 20000 and the flow enters the turbulent regime. Here, we present results from the first large Atwood number (0.7) Rayleigh-Taylor experimental campaign for mixing Reynolds number beyond 20000 in an effort to characterize the self-similar nature of the instability. Experiments are performed in a statistically steady gas tunnel facility, allowing for the evaluation of turbulence statistics. A visualization diagnostic is used to study the evolution of the mixing width as the instability grows. This allows for computation of the instability growth rate. For the first time in such a facility, stereoscopic particle image velocimetry is used to resolve three-component velocity information in a plane. Velocity means, fluctuations, and correlations are considered as well as their appropriate scaling. Probability density functions of velocity fields, energy spectra, and higher-order statistics are also presented. The energy budget of the flow is described, including the ratio of the kinetic energy to the released potential energy. This work was supported by the DOE-NNSA SSAA Grant DE-NA0002922.
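
    The self-similar growth against which such experiments are analyzed is the standard late-time Rayleigh-Taylor scaling h(t) = α·A·g·t², with A the Atwood number, g the acceleration, and α the measured growth constant. The scaling itself is standard in the RT literature; the numbers below are hypothetical, not the campaign's measurements:

    ```python
    def rt_mixing_height(alpha, atwood, g, t):
        """Self-similar Rayleigh-Taylor bubble-front growth: h = alpha * A * g * t^2."""
        return alpha * atwood * g * t ** 2

    def growth_constant(h, atwood, g, t):
        """Invert the scaling to recover alpha from a measured mixing height."""
        return h / (atwood * g * t ** 2)

    # Hypothetical: A = 0.7, g = 9.81 m/s^2, alpha = 0.05, t = 1.2 s
    h = rt_mixing_height(0.05, 0.7, 9.81, 1.2)
    alpha_recovered = growth_constant(h, 0.7, 9.81, 1.2)
    ```

    Fitting α from the measured mixing-width history is how the visualization diagnostic described above yields the instability growth rate.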

  12. Quality and efficiency in high dimensional Nearest neighbor search

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2009-01-01

    Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
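
    The locality-sensitive hashing idea underlying the LSB-tree can be illustrated with the simplest signed-random-projection scheme: nearby points tend to fall on the same side of random hyperplanes, so they share hash buckets. This is a toy sketch, not the paper's LSB-tree; bucket misses are exactly the quality issue that rigorous-LSH and the LSB-forest are designed to control:

    ```python
    import random

    def lsh_signature(vec, hyperplanes):
        """Signed random projections: one bit per hyperplane."""
        return tuple(int(sum(h_i * v_i for h_i, v_i in zip(h, vec)) >= 0.0)
                     for h in hyperplanes)

    def build_index(points, hyperplanes):
        """Bucket point indices by their LSH signature."""
        table = {}
        for idx, p in enumerate(points):
            table.setdefault(lsh_signature(p, hyperplanes), []).append(idx)
        return table

    def approx_nn(q, points, table, hyperplanes):
        """Approximate NN: scan only the query's bucket (may miss the true NN)."""
        bucket = table.get(lsh_signature(q, hyperplanes), [])
        return min(bucket,
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], q)),
                   default=None)

    rng = random.Random(42)
    hyperplanes = [[rng.gauss(0.0, 1.0) for _ in range(2)] for _ in range(4)]
    points = [(1.0, 0.1), (0.9, 0.2), (-1.0, -0.1), (-0.8, 0.3)]
    table = build_index(points, hyperplanes)
    ```

    A query landing in an empty or wrong bucket returns a poor (or no) neighbor; practical schemes use multiple tables, and the LSB-forest bounds this failure probability while keeping the index in a relational B-tree.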

  13. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery

    DEFF Research Database (Denmark)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian

    2017-01-01

    BACKGROUND: This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment as early comparisons have shown contradictory results due to technological shortcomings. DATA...... Central Register of Controlled Trials database. CONCLUSIONS: Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D......-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited for the convincing results to be reproduced....

  14. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models can be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a message 7× longer at the same level of security.
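The LSB-matching baseline that HUGO is compared against can be sketched in a few lines: each message bit is carried in a pixel's least significant bit, and a mismatched pixel is moved ±1 at random rather than having its bit overwritten. Pixel values and bits below are illustrative:

```python
# LSB matching: embed one message bit per pixel by +/-1 changes.
import random

random.seed(5)

def lsb_match(pixels, bits):
    out = list(pixels)
    for i, bit in enumerate(bits):
        if out[i] & 1 != bit:
            step = random.choice((-1, 1))
            # stay inside the 8-bit range
            if out[i] == 0:
                step = 1
            elif out[i] == 255:
                step = -1
            out[i] += step
    return out

def extract(pixels, k):
    # The receiver just reads the first k least significant bits.
    return [p & 1 for p in pixels[:k]]

pixels = [7, 128, 0, 255, 42]
bits = [0, 1, 1, 0, 0]
stego = lsb_match(pixels, bits)
```

Unlike plain LSB replacement, the ±1 choice avoids the pairs-of-values artifact; HUGO goes further by steering every change with a feature-space distortion model.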

  15. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....

  16. Dimensionality analysis of multiparticle production at high energies

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1989-01-01

An algorithm for the analysis of multiparticle final states is offered. From the Renyi dimensionalities calculated from experimental data, whether for hadron distributions over rapidity intervals or for particle distributions in an N-dimensional momentum space, one can judge the degree of correlation of the particles and identify the momentum-space projections and regions where singularities of the probability measure are observed. The method is tested in a series of calculations with samples of points from fractal objects and with samples obtained by means of different generators of pseudo- and quasi-random numbers. 27 refs.; 11 figs
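A Renyi dimensionality of order q is typically estimated by box counting: compute the Renyi entropy S_q = log(Σ p_i^q)/(1−q) of the box-occupation probabilities at two grid scales and take the slope against log(1/ε). A toy sketch for q = 2 on uniformly scattered 2-D points (assumed data, not the paper's):

```python
# Box-counting estimate of the order-2 Renyi dimension.
import math
import random

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(5000)]

def renyi_entropy(points, eps, q=2):
    # Occupation probabilities on an eps-grid, then S_q = log(sum p^q)/(1-q).
    boxes = {}
    for x, y in points:
        key = (int(x / eps), int(y / eps))
        boxes[key] = boxes.get(key, 0) + 1
    n = len(points)
    return math.log(sum((c / n) ** q for c in boxes.values())) / (1 - q)

# Dimension = slope of S_q versus log(1/eps) between two scales.
e1, e2 = 1 / 8, 1 / 16
D2 = (renyi_entropy(pts, e2) - renyi_entropy(pts, e1)) / math.log(e1 / e2)
```

For a plane-filling set D2 comes out near 2; for a fractal sample or strongly correlated particle momenta it drops below the embedding dimension, which is what the analysis above exploits.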

  17. Problems of high temperature superconductivity in three-dimensional systems

    Energy Technology Data Exchange (ETDEWEB)

    Geilikman, B T

    1973-01-01

A review is given of more recent papers on this subject, which have dealt mainly with two-dimensional systems. The present paper extends the treatment to three-dimensional systems, under the following headings: systems with collective electrons of one group and localized electrons of another group (compounds of metals with non-metals: dielectrics, organic substances, undoped semiconductors, molecular crystals); experimental investigations of superconducting compounds of metals with organic compounds, dielectrics, semiconductors, and semi-metals; and systems with two or more groups of collective electrons. Mechanisms are considered and models are derived. 86 references.

  18. High similarity in physicochemical properties of chitin and chitosan from nymphs and adults of a grasshopper.

    Science.gov (United States)

    Erdogan, Sevil; Kaya, Murat

    2016-08-01

    This is the first study to explain the differences in the physicochemical properties of chitin and chitosan obtained from the nymphs and adults of Dociostaurus maroccanus using the same method. Fourier transform infrared spectroscopy, thermogravimetric analysis and x-ray diffraction analysis results demonstrated that the chitins from both the adults and nymphs were in the α-form. The chitin contents of the adults (14%) and nymphs (12%) were of the same order of magnitude. The crystalline index values of chitins from the adult and nymph grasshoppers were 71% and 74%, respectively. Thermal stabilities of the chitins and chitosans from adult and nymph grasshoppers were close to each other. Both the adult (7.2kDa) and nymph (5.6kDa) chitosans had low molar masses. Environmental scanning electron microscopy revealed that the surface morphologies of both chitins consisted of nanofibers and nanopores together, and they were very similar to each other. Consequently, it was determined that the physicochemical properties of the chitins and chitosans from adults and nymphs of D. maroccanus were not very different, so it can be hypothesized that the development of the chitin structure in the nymph has almost been completed and the nymph chitin has the same characteristics as the adult. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Decoding the Divergent Subcellular Location of Two Highly Similar Paralogous LEA Proteins

    Directory of Open Access Journals (Sweden)

    Marie-Hélène Avelange-Macherel

    2018-05-01

    Full Text Available Many mitochondrial proteins are synthesized as precursors in the cytosol with an N-terminal mitochondrial targeting sequence (MTS which is cleaved off upon import. Although much is known about import mechanisms and MTS structural features, the variability of MTS still hampers robust sub-cellular software predictions. Here, we took advantage of two paralogous late embryogenesis abundant proteins (LEA from Arabidopsis with different subcellular locations to investigate structural determinants of mitochondrial import and gain insight into the evolution of the LEA genes. LEA38 and LEA2 are short proteins of the LEA_3 family, which are very similar along their whole sequence, but LEA38 is targeted to mitochondria while LEA2 is cytosolic. Differences in the N-terminal protein sequences were used to generate a series of mutated LEA2 which were expressed as GFP-fusion proteins in leaf protoplasts. By combining three types of mutation (substitution, charge inversion, and segment replacement, we were able to redirect the mutated LEA2 to mitochondria. Analysis of the effect of the mutations and determination of the LEA38 MTS cleavage site highlighted important structural features within and beyond the MTS. Overall, these results provide an explanation for the likely loss of mitochondrial location after duplication of the ancestral gene.

  20. Earliest Memories and Recent Memories of Highly Salient Events--Are They Similar?

    Science.gov (United States)

    Peterson, Carole; Fowler, Tania; Brandeau, Katherine M.

    2015-01-01

    Four- to 11-year-old children were interviewed about 2 different sorts of memories in the same home visit: recent memories of highly salient and stressful events--namely, injuries serious enough to require hospital emergency room treatment--and their earliest memories. Injury memories were scored for amount of unique information, completeness…

  1. Similarity of the leading contributions to the self-energy and the thermodynamics in two- and three-dimensional Fermi Liquids

    International Nuclear Information System (INIS)

    Coffey, D.; Bedell, K.S.

    1993-01-01

We compare the self-energy and entropy of two- and three-dimensional Fermi liquids (FLs) using a model with a contact interaction between fermions. For a two-dimensional (2D) FL we find that there are T^2 contributions to the entropy from interactions separate from those due to the collective modes. These T^2 contributions arise from nonanalytic corrections to the real part of the self-energy and are analogous to the T^3 ln T contributions present in the entropy of a three-dimensional (3D) FL. The difference between the 2D and 3D results arises solely from the different phase space factors.

  2. High-Throughput Gene Expression Profiles to Define Drug Similarity and Predict Compound Activity.

    Science.gov (United States)

    De Wolf, Hans; Cougnaud, Laure; Van Hoorde, Kirsten; De Bondt, An; Wegner, Joerg K; Ceulemans, Hugo; Göhlmann, Hinrich

    2018-04-01

By adding biological information beyond the chemical properties and desired effect of a compound, uncharted compound areas and connections can be explored. In this study, we add transcriptional information for 31K compounds of Janssen's primary screening deck, using the HT L1000 platform, and assess (a) the transcriptional connection score for generating compound similarities, (b) machine learning algorithms for generating target activity predictions, and (c) the scaffold hopping potential of the resulting hits. We demonstrate that the transcriptional connection score is best computed from the significant genes only and should be interpreted within its confidence interval, for which we provide the statistics. These guidelines help to reduce noise, increase reproducibility, and enable the separation of specific and promiscuous compounds. The added value of machine learning is demonstrated for the NR3C1 and HSP90 targets. Support Vector Machine models yielded balanced accuracy values ≥80% when the expression values from DDIT4 & SERPINE1 and TMEM97 & SPR were used to predict the NR3C1 and HSP90 activity, respectively. Combining both models resulted in 22 new and confirmed HSP90-independent NR3C1 inhibitors, providing two scaffolds (i.e., pyrimidine and pyrazolo-pyrimidine), which could potentially be of interest in the treatment of depression (i.e., inhibiting the glucocorticoid receptor, NR3C1, while leaving its chaperone, HSP90, unaffected). As such, the initial hit rate increased by a factor of 300, as less, but more specific, chemistry could be screened based on the upfront computed activity predictions.

  3. Matrix correlations for high-dimensional data: The modified RV-coefficient

    NARCIS (Netherlands)

    Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van

    2009-01-01

    Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they

  4. TESTING HIGH-DIMENSIONAL COVARIANCE MATRICES, WITH APPLICATION TO DETECTING SCHIZOPHRENIA RISK GENES.

    Science.gov (United States)

    Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn

    2017-09-01

Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
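The two-sample covariance-comparison setting can be illustrated with a much simpler statistic than sLED's sparse leading eigenvalue: below, the test statistic is just the largest absolute entry of the differential matrix cov(a) − cov(b), calibrated by a label-permutation null. This is a toy analogue of the paper's idea, on synthetic data:

```python
# Permutation test for equality of two covariance matrices
# (toy statistic: max absolute entry of the differential matrix).
import random

random.seed(1)

def cov(xs):
    # Sample covariance of a list of d-dimensional points.
    n, d = len(xs), len(xs[0])
    mu = [sum(x[j] for x in xs) / n for j in range(d)]
    return [[sum((x[j] - mu[j]) * (x[k] - mu[k]) for x in xs) / (n - 1)
             for k in range(d)] for j in range(d)]

def stat(a, b):
    ca, cb = cov(a), cov(b)
    d = len(ca)
    return max(abs(ca[j][k] - cb[j][k]) for j in range(d) for k in range(d))

def perm_pvalue(a, b, n_perm=300):
    obs = stat(a, b)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)   # break the case/control labels
        if stat(pooled[:len(a)], pooled[len(a):]) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one smoothing

# Groups differ only in the variance of the second coordinate.
a = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(60)]
b = [[random.gauss(0, 1), 3 * random.gauss(0, 1)] for _ in range(60)]
p = perm_pvalue(a, b)
```

sLED replaces the max-entry statistic with the leading eigenvalue of a sparsified differential matrix, which is what gives it power against the sparse, weak signals described above.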

  5. Apparent mineral retention is similar in control and hyperinsulinemic men after consumption of high amylose cornstarch.

    Science.gov (United States)

    Behall, Kay M; Howe, Juliette C; Anderson, Richard A

    2002-07-01

    The effects on apparent mineral retention after long-term consumption of a high amylose diet containing 30 g resistant starch (RS) were investigated in 10 control and 14 hyperinsulinemic men. Subjects consumed products (bread, muffins, cookies, corn flakes and cheese puffs) made with standard (70% amylopectin, 30% amylose; AP) or high amylose (70% amylose, 30% amylopectin; AM) cornstarch for two 14-wk periods in a crossover pattern. Starch products replaced usual starches in the habitual diet for 10 wk followed by 4 wk of consuming the controlled diets. During wk 12, all urine, feces and duplicate foods were collected for 7 d. Urinary chromium losses after a glucose tolerance test or 24-h collections of the hyperinsulinemic and control subjects did not differ and were not altered by diet. Except for zinc, the two subject types did not differ significantly in apparent mineral balance. Apparent retentions of calcium and magnesium were not significantly affected by diet (AM vs. AP) or type-by-diet interaction. Apparent iron retention tended to be greater after AM than AP consumption (P copper retention was greater after consuming AP than after AM (P < 0.02), whereas apparent zinc retention was greater after consuming AM than after AP (P < 0.018). Zinc also showed a significant type-by-diet interaction (P < 0.034) with control subjects retaining less zinc after consuming AP than after AM. In summary, a high amylose cornstarch diet containing 30 g RS could be consumed long term without markedly affecting, and possibly enhancing, retention of some minerals.

  6. Single nucleus genome sequencing reveals high similarity among nuclei of an endomycorrhizal fungus.

    Directory of Open Access Journals (Sweden)

    Kui Lin

    2014-01-01

Full Text Available Nuclei of arbuscular endomycorrhizal fungi have been described as highly diverse due to their asexual nature and the absence of a single-cell stage with only one nucleus. This has raised fundamental questions concerning speciation, selection and transmission of the genetic make-up to next generations. Although this concept has become textbook knowledge, it is based on studying only a few loci, including 45S rDNA. To provide a more comprehensive insight into the genetic makeup of arbuscular endomycorrhizal fungi, we applied de novo genome sequencing of individual nuclei of Rhizophagus irregularis. This revealed a surprisingly low level of polymorphism between nuclei. In contrast, within a nucleus, the 45S rDNA repeat unit turned out to be highly diverged. This finding demystifies a long-standing hypothesis on the complex genetic makeup of arbuscular endomycorrhizal fungi. Subsequent genome assembly resulted in the first draft reference genome sequence of an arbuscular endomycorrhizal fungus. Its length is 141 Mbp, representing over 27,000 protein-coding gene models. We used the genomic sequence to reinvestigate the phylogenetic relationships of Rhizophagus irregularis with other fungal phyla. This unambiguously demonstrated that Glomeromycota are more closely related to Mucoromycotina than to its postulated sister Dikarya.

  7. Vocal neighbour-mate discrimination in female great tits despite high song similarity

    DEFF Research Database (Denmark)

    Blumenrath, Sandra H.; Dabelsteen, Torben; Pedersen, Simon Boel

    2007-01-01

    Discrimination between conspecifics is important in mediating social interactions between several individuals in a network environment. In great tits, Parus major, females readily distinguish between the songs of their mate and those of a stranger. The high degree of song sharing among neighbouring...... males, however, raises the question of whether females are also able to perceive differences between songs shared by their mate and a neighbour. The great tit is a socially monogamous, hole-nesting species with biparental care. Pair bond maintenance and coordination of the pair's reproductive efforts...... are important, and the female's ability to recognize her mate's song should therefore be adaptive. In a neighbour-mate discrimination playback experiment, we presented 13 incubating great tit females situated inside nestboxes with a song of their mate and the same song type from a neighbour. Each female...

  8. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise include sampling variation, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
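The thresholding idea can be illustrated with the simpler universal threshold t = c·sqrt(log p / n) familiar from the covariance-thresholding literature: correlations whose magnitude falls below t are treated as noise and set to zero. This sketch uses plain Pearson correlation on synthetic "genes", not the authors' adaptive, robust estimator:

```python
# Hard-thresholded sample correlation matrix (sketch).
import math
import random

random.seed(3)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def threshold_corr(rows, c=2.0):
    # rows: p "genes", each observed on the same n samples.
    p, n = len(rows), len(rows[0])
    t = c * math.sqrt(math.log(p) / n)   # universal threshold level
    R = [[1.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(i + 1, p):
            r = pearson(rows[i], rows[j])
            R[i][j] = R[j][i] = r if abs(r) > t else 0.0
    return R

n = 100
g0 = [random.gauss(0, 1) for _ in range(n)]
g1 = [v + 0.1 * random.gauss(0, 1) for v in g0]   # strongly tied to g0
g2 = [random.gauss(0, 1) for _ in range(n)]       # independent
R = threshold_corr([g0, g1, g2])
```

Adaptive thresholding, as in the paper, lets t vary per entry with the estimated variance of each correlation, and a robust correlation measure would replace `pearson` to tame outlying samples.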

  9. AN EFFECTIVE MULTI-CLUSTERING ANONYMIZATION APPROACH USING DISCRETE COMPONENT TASK FOR NON-BINARY HIGH DIMENSIONAL DATA SPACES

    Directory of Open Access Journals (Sweden)

    L.V. Arun Shalin

    2016-01-01

Full Text Available Clustering is the process of grouping elements so that the elements assigned to the same cluster are more similar to each other than to the remaining data points. When clustering high-dimensional data, certain difficulties are ubiquitous. Previous work using anonymization methods for high-dimensional data spaces failed to address dimensionality reduction for non-binary databases. In this work we study methods for dimensionality reduction for non-binary databases; analyzing the behavior of dimensionality reduction for non-binary databases yields performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. We first present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization

  10. Cortical cytasters: a highly conserved developmental trait of Bilateria with similarities to Ctenophora

    Directory of Open Access Journals (Sweden)

    Salinas-Saavedra Miguel

    2011-12-01

Full Text Available Abstract Background Cytasters (cytoplasmic asters) are centriole-based nucleation centers of microtubule polymerization that are observable in large numbers in the cortical cytoplasm of the egg and zygote of bilaterian organisms. In both protostome and deuterostome taxa, cytasters have been described to develop during oogenesis from vesicles of nuclear membrane that move to the cortical cytoplasm. They become associated with several cytoplasmic components, and participate in the reorganization of cortical cytoplasm after fertilization, patterning the antero-posterior and dorso-ventral body axes. Presentation of the hypothesis The specific resemblances in the development of cytasters in both protostome and deuterostome taxa suggest that an independent evolutionary origin is unlikely. An assessment of published data confirms that cytasters are present in several protostome and deuterostome phyla, but are absent in the non-bilaterian phyla Cnidaria and Ctenophora. We hypothesize that cytasters evolved in the lineage leading to Bilateria and were already present in the most recent common ancestor shared by protostomes and deuterostomes. Thus, cytasters would be an ancient and highly conserved trait that is homologous across the different bilaterian phyla. The alternative possibility is homoplasy, that is, cytasters have evolved independently in different lineages of Bilateria. Testing the hypothesis So far, available published information shows that appropriate observations have been made in eight different bilaterian phyla. All of them present cytasters. This is consistent with the hypothesis of homology and conservation. However, there are several important groups for which there are no currently available data. The hypothesis of homology predicts that cytasters should be present in these groups.
Increasing the taxonomic sample using modern techniques uniformly will test for evolutionary patterns supporting homology, homoplasy, or secondary loss of

  11. Dimensional consistency achieved in high-performance synchronizing hubs

    International Nuclear Information System (INIS)

    Garcia, P.; Campos, M.; Torralba, M.

    2013-01-01

The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production route (“2P2S”) has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process. (Author) 21 refs.

  12. Two-dimensional impurity transport calculations for a high recycling divertor

    International Nuclear Information System (INIS)

    Brooks, J.N.

    1986-04-01

Two-dimensional analysis of impurity transport in a high recycling divertor shows asymmetric particle fluxes to the divertor plate, low helium pumping efficiency, and high scrape-off zone shielding for sputtered impurities.

  13. On High Dimensional Searching Spaces and Learning Methods

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2017-01-01

    , and similarity functions and discuss the pros and cons of using each of them. Conventional similarity functions evaluate objects in the vector space. Contrarily, Weighted Feature Distance (WFD) functions compare data objects in both feature and vector spaces, preventing the system from being affected by some...

  14. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

Full Text Available This paper presents the R package HDclassif, which is devoted to the clustering and discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, which allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the general public license, as part of the R software project.
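The expectation-maximization loop that HDDC relies on can be seen in a bare-bones 1-D, two-component Gaussian mixture (HDDC itself fits high-dimensional, subspace-constrained Gaussians; the data and initialization here are illustrative):

```python
# Minimal EM for a two-component 1-D Gaussian mixture.
import math
import random

random.seed(11)

def em_gmm(xs, iters=50):
    mu = [min(xs), max(xs)]       # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each x.
        resp = []
        for x in xs:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: reestimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return w, mu, var

xs = ([random.gauss(0, 1) for _ in range(100)]
      + [random.gauss(6, 1) for _ in range(100)])
w, mu, var = em_gmm(xs)
```

HDDA/HDDC keep this E/M structure but constrain each component's covariance to a low-dimensional subspace plus noise, which is what makes the estimation stable when the dimension is large.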

  15. Similar sediment provenance of low and high arsenic aquifers in Bangladesh

    Science.gov (United States)

    Zheng, Y.; Yang, Q.; Li, S.; Hemming, S. R.; Zhang, Y.; Rasbury, T.; Hemming, G.

    2017-12-01

    Geogenic arsenic (As) in drinking water, especially in groundwater, is estimated to have affected the health of over 100 million people worldwide, with nearly half of the total at risk population in Bangladesh. Sluggish flow and reducing biogeochemical environment in sedimentary aquifers have been shown as the primary controls for the release of As from sediment to the shallower groundwater in the Holocene aquifer. In contrast, deeper groundwater in the Pleistocene aquifer is depleted in groundwater As and sediment-extractable As. This study assesses the origin of the sediment in two aquifers of Bangladesh that contain distinctly different As levels to ascertain whether the source of the sediment is a factor in this difference through measurements of detrital mica Ar-Ar age, detrital zircon U-Pb age, as well as sediment silicate Sr and Nd isotopes. Whole rock geochemical data were also used to illuminate the extent of chemical weathering. Detrital mica 40Ar/39Ar cooling ages and detrital zircon U-Pb ages show no statistical difference between high-As Holocene sediment and low-As Pleistocene sediment, but suggest an aquifer sediment source of both the Brahmaputra and the Ganges rivers. Silicate 87Sr/86Sr and 143Nd/144Nd further depict a major sediment source from the Brahmaputra river, which is supported by a two end member mixing model using 87Sr/86Sr and Sr concentrations. Pleistocene and Holocene sediments show little difference in weathering of mobile elements including As, while coarser sediments and a longer history of the Pleistocene aquifer suggest that sorting and flushing play more important roles in regulating the contrast of As occurrence between these two aquifers.

  16. Dimensional consistency achieved in high-performance synchronizing hubs

    Directory of Open Access Journals (Sweden)

    García, P.

    2013-02-01

Full Text Available The tolerances of parts produced for the automotive industry are so tight that any small process variation may mean that the product does not fulfill them. As dimensional tolerances decrease, the material properties of parts are expected to be improved. Depending on the dimensional and material requirements of a part, different production routes are available to find robust processes, minimizing cost and maximizing process capability. Dimensional tolerances have been reduced in recent years, and as a result, the double pressing-double sintering production route (“2P2S”) has again become an accurate way to meet these increasingly narrow tolerances. In this paper, it is shown that the process parameters of the first sintering have great influence on the following production steps and the dimensions of the final parts. The roles of factors other than density and the second sintering process in defining the final dimensions of the product are probed. All trials were done in a production line that produces synchronizer hubs for manual transmissions, allowing the maintenance of stable conditions and control of those parameters that are relevant for the product and process.


  17. Interface between path and orbital angular momentum entanglement for high-dimensional photonic quantum information.

    Science.gov (United States)

    Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton

    2014-07-30

    Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute such high-dimensional quantum states over large distances, transverse spatial modes, such as Laguerre-Gauss modes carrying orbital angular momentum, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Our results thus show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to the implementation of broad, complex quantum networks in which high-dimensionally entangled states could be distributed over distant photonic chips.

  18. A tale of two pectins: Diverse fine structures can result from identical processive PME treatments on similar high DM substrates

    Science.gov (United States)

    The effects of a processive pectin-methylesterase treatment on two different pectins, both possessing a high degree of methylesterification, were investigated. While the starting samples were purportedly very similar in fine structure, and even though the sample-averaged degree of methylesterificati...

  19. Differences and similarities in double special educational needs: high abilities/giftedness x Asperger’s Syndrome

    Directory of Open Access Journals (Sweden)

    Nara Joyce Wellausen Vieira

    2012-08-01

    Full Text Available The study was developed from a literature search in books, articles and theses published since the year 2000 on the themes of High Abilities/Giftedness and Asperger’s Syndrome. The objectives of this research were to survey publications from 2000 to 2011 on the features common to and distinguishing persons with Asperger’s Syndrome and persons with high abilities/giftedness, and also to relate the number of publications found in Education and Special Education. Theoretically, we draw on the conception of High Abilities/Giftedness of Renzulli (2004) and Gardner (2000) and, for Asperger’s Syndrome, on Mello (2007) and Klin (2006). When analyzing the data, similarities and differences were perceived between the behavioral characteristics of individuals with High Abilities/Giftedness and those with Asperger’s Syndrome. It is possible to point out that much evidence separates these two special educational needs and that few similarities exist between them. However, one should not neglect that these two particular special educational needs may co-occur, because there are still few studies that verify theoretically the differences and similarities of these subjects, much less ones that investigate these similarities and distinctions in the subjects themselves.

  20. Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids

    NARCIS (Netherlands)

    bin Zubair, H.; Oosterlee, C.E.; Wienands, R.

    2006-01-01

    This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We

  1. Turbulence, dynamic similarity and scale effects in high-velocity free-surface flows above a stepped chute

    Science.gov (United States)

    Felder, Stefan; Chanson, Hubert

    2009-07-01

    In high-velocity free-surface flows, air entrainment is common through the interface, and intense interactions take place between turbulent structures and entrained bubbles. Two-phase flow properties were measured herein in high-velocity open channel flows above a stepped chute. Detailed turbulence measurements were conducted in a large-size facility, and a comparative analysis was applied to test the validity of the Froude and Reynolds similarities. The results showed consistently that the Froude similitude was not satisfied using a 2:1 geometric scaling ratio. Fewer entrained bubbles and comparatively larger bubbles were observed at the smaller Reynolds numbers, as well as lower turbulence levels and larger turbulent length and time scales. The results implied that small-size models underestimated the rate of energy dissipation and the aeration efficiency of prototype stepped spillways for similar flow conditions. A Reynolds similitude was likewise tested; the results also showed some significant scale effects. However, a number of self-similar relationships remained invariant under changes of scale and confirmed the analysis of Chanson and Carosi (Exp Fluids 42:385-401, 2007). The finding is significant because self-similarity may provide a picture general enough to be used to characterise the air-water flow field in large prototype channels.

  2. Self-similarity of high-p_T hadron production in π⁻p and π⁻A collisions

    International Nuclear Information System (INIS)

    Tokarev, M.V.; Panebrattsev, Yu.A.; Skoro, G.P.; Zborovsky, I.

    2002-01-01

    Self-similar properties of hadron production in π⁻p and π⁻A collisions in the high-p_T region are studied. The analysis of experimental data is performed in the framework of z-scaling. The scaling variable depends on the anomalous fractal dimension of the incoming pion; its value is found to be δ_π ≅ 0.1. The scaling function Ψ(z) is shown to be independent of the collision energy. The A-dependence of the z-presentation of the data confirms the self-similarity of particle formation in πA collisions

  3. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data as well.
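
    The workflow described above can be sketched in a few lines. This is an illustration of the idea rather than the authors' method: PCA on a sparse, uniformly drawn sample stands in for the manifold projection, and a simple least-squares fit on quadratic features stands in for the neural network; all sizes and the target function are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # High-dimensional data that actually lies on a low-dimensional manifold:
    # a 2-D latent variable embedded linearly in 50 dimensions.
    n, d_high, d_low = 500, 50, 2
    Z = rng.normal(size=(n, d_low))          # latent coordinates on the manifold
    A = rng.normal(size=(d_low, d_high))     # linear embedding into 50-D space
    X = Z @ A                                # observed high-dimensional data
    y = Z[:, 0] ** 2 + Z[:, 1]               # function to approximate

    # Build the projection from a sparse, uniformly drawn sample of the data
    # (PCA via SVD stands in for the projection step).
    sample = X[rng.choice(n, size=50, replace=False)]
    _, _, Vt = np.linalg.svd(sample - sample.mean(axis=0), full_matrices=False)
    P = Vt[:d_low].T                         # top principal directions

    # Approximate the function in the projected space; least squares on
    # quadratic features stands in for the network trained over that space.
    Xp = X @ P
    feats = np.column_stack([Xp, Xp ** 2, Xp[:, 0] * Xp[:, 1], np.ones(n)])
    coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
    mse = float(np.mean((feats @ coef - y) ** 2))
    # mse is near zero here because the data is exactly rank-2, so the sparse
    # sample already determines the projection; real manifolds are only
    # approximately low-dimensional and the error is correspondingly larger.
    ```

    Even though the projection was built from only 50 of the 500 points, the fit in the 2-D projection space recovers the function, which is the effect the abstract argues for.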

  4. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
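
    The fast Poisson step such a particle-in-cell scheme relies on can be sketched with FFTs. The following minimal periodic 2-D solver is a hedged illustration of that one step, not the Lasnex-era program itself; the grid size, periodic boundary conditions and unit normalisation are assumptions.

    ```python
    import numpy as np

    def solve_poisson_fft(rho, L=1.0):
        """Solve the Poisson equation laplacian(phi) = -rho on an N x N
        periodic grid of side L using FFTs, as in the fast field solve of a
        particle-in-cell code. Returns the potential phi with zero mean."""
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = 1.0                    # avoid division by zero for the mean mode
        phi_hat = np.fft.fft2(rho) / k2   # -k^2 phi_hat = -rho_hat
        phi_hat[0, 0] = 0.0               # fix the (arbitrary) mean potential to zero
        return np.real(np.fft.ifft2(phi_hat))
    ```

    In a full simulation loop, the charge deposited by the superparticles would be passed through a solver like this each step, the field derived from phi would be interpolated back to the particles (the "area weighting" the abstract mentions), and Newton's law would advance them.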

  5. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
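
    The core intuition, that unreliable per-variable variance estimates improve when pulled toward a pooled target, can be illustrated with a simple Stein-type shrinkage. This is a simplified stand-in, not the clustering-based MVR procedure of the paper, and the fixed shrinkage intensity `lam` is an assumption (the real method sets it adaptively):

    ```python
    import numpy as np

    def shrink_variances(X, lam=0.5):
        """Stein-type shrinkage of variable-wise variances toward their pooled
        mean, trading a little bias for a large reduction in estimator
        variance when n << p. X: (n, p) data matrix; lam in [0, 1] is the
        shrinkage intensity (fixed here for illustration)."""
        s2 = X.var(axis=0, ddof=1)   # noisy per-variable sample variances
        pooled = s2.mean()           # stable target shared by all variables
        return (1.0 - lam) * s2 + lam * pooled
    ```

    With `lam=0`, this returns the usual (unreliable) sample variances; with `lam=1`, every variable gets the pooled estimate. The regularized t-like statistics in the paper are built from joint mean-variance analogues of such shrunken estimators.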

  6. Aspects of high-dimensional theories in embedding spaces

    International Nuclear Information System (INIS)

    Maia, M.D.; Mecklenburg, W.

    1983-01-01

    The question of whether physical meaning may be attributed to the extra dimensions provided by embedding procedures as applied to physical space-times is discussed. The similarities and differences between the present picture and conventional Kaluza-Klein pictures are commented on. (Author) [pt

  7. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    OpenAIRE

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data)...

  8. Mitigating the Insider Threat Using High-Dimensional Search and Modeling

    National Research Council Canada - National Science Library

    Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago

    2006-01-01

    In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...

  9. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data

  11. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
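
    The central step, finding nonnegative barycentric weights that sum to one while treating the approximation error explicitly, can be posed as a small linear program. The SciPy sketch below minimises the L1 reconstruction error with slack variables; this is one plausible reading of "allowing the approximation errors explicitly", not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def barycentric_weights(neighbors, x):
        """Find convex-combination weights w >= 0 with sum(w) = 1 over the
        neighbor points that reproduce the query point x with minimal L1
        error, via linear programming.
        neighbors: (k, d) array of phase-space points; x: (d,) query point."""
        k, d = neighbors.shape
        # Decision variables: [w_1..w_k, e_1..e_d]; minimise the error slacks e.
        c = np.concatenate([np.zeros(k), np.ones(d)])
        A = neighbors.T                          # (d, k): maps weights to a point
        A_ub = np.block([[A, -np.eye(d)],        #  A w - x <= e
                         [-A, -np.eye(d)]])      #  x - A w <= e
        b_ub = np.concatenate([x, -x])
        A_eq = np.concatenate([np.ones(k), np.zeros(d)])[None, :]  # sum(w) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (k + d))
        return res.x[:k]
    ```

    A free-running prediction would then apply the same weights to the neighbours' successor states to obtain the next point of the trajectory, and iterate.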

  12. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  13. Distribution of high-dimensional entanglement via an intra-city free-space link.

    Science.gov (United States)

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  14. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space by decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  15. High-power Yb-fiber comb based on pre-chirped-management self-similar amplification

    Science.gov (United States)

    Luo, Daping; Liu, Yang; Gu, Chenglin; Wang, Chao; Zhu, Zhiwei; Zhang, Wenchao; Deng, Zejiang; Zhou, Lian; Li, Wenxue; Zeng, Heping

    2018-02-01

    We report a fiber self-similar-amplification (SSA) comb system that delivers a 250-MHz, 109-W, 42-fs pulse train with a 10-dB spectral width of 85 nm at 1056 nm. A pair of grisms is employed to compensate the group-velocity dispersion and third-order dispersion of the pre-amplified pulses, facilitating self-similar evolution and self-phase modulation (SPM). Moreover, we analyze the stability and noise characteristics of both the locked carrier-envelope phase and the repetition rate, verifying the stability of the generated high-power comb. The demonstration of the SSA comb at such high power proves the feasibility of the SPM-based low-noise ultrashort comb.

  16. Two dimensional simulation of high power laser-surface interaction

    International Nuclear Information System (INIS)

    Goldman, S.R.; Wilke, M.D.; Green, R.E.L.; Johnson, R.P.; Busch, G.E.

    1998-01-01

    For laser intensities in the range of 10⁸-10⁹ W/cm², and pulse lengths of order 10 μs or longer, the authors have modified the inertial confinement fusion code Lasnex to simulate gaseous and some dense-material aspects of the laser-matter interaction. The unique aspect of their treatment consists of an ablation model which defines a dense material-vapor interface and then calculates the mass flow across this interface. The model treats the dense material as a rigid two-dimensional mass and heat reservoir, suppressing all hydrodynamic motion in the dense material. The computer simulations and additional post-processors provide predictions for measurements including impulse given to the target, pressures at the target interface, electron temperatures and densities in the vapor-plasma plume region, and emission of radiation from the target. The authors will present an analysis of some relatively well diagnosed experiments which have been useful in developing their modeling. The simulations match experimentally obtained target impulses, pressures at the target surface inside the laser spot, and radiation emission from the target to within about 20%. Hence their simulational technique appears to form a useful basis for further investigation of laser-surface interaction in this intensity and pulse-width range. This work is useful in many technical areas such as materials processing.

  17. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution, depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  18. Three-dimensional structure of a Streptomyces sviceus GNAT acetyltransferase with similarity to the C-terminal domain of the human GH84 O-GlcNAcase

    International Nuclear Information System (INIS)

    He, Yuan; Roth, Christian; Turkenburg, Johan P.; Davies, Gideon J.

    2013-01-01

    The crystal structure of a bacterial acetyltransferase with 27% sequence identity to the C-terminal domain of human O-GlcNAcase has been solved at 1.5 Å resolution. This S. sviceus protein is compared with known GCN5-related acetyltransferases, adding to the diversity observed in this superfamily. The mammalian O-GlcNAc hydrolysing enzyme O-GlcNAcase (OGA) is a multi-domain protein with glycoside hydrolase activity in the N-terminus and with a C-terminal domain that has low sequence similarity to known acetyltransferases, prompting speculation, albeit controversial, that the C-terminal domain may function as a histone acetyltransferase (HAT). There are currently scarce data available regarding the structure and function of this C-terminal region. Here, a bacterial homologue of the human OGA C-terminal domain, an acetyltransferase protein (accession No. ZP-05014886) from Streptomyces sviceus (SsAT), was cloned and its crystal structure was solved to high resolution. The structure reveals a conserved protein core that has considerable structural homology to the acetyl-CoA (AcCoA) binding site of GCN5-related acetyltransferases (GNATs). Calorimetric data further confirm that SsAT is indeed able to bind AcCoA in solution with micromolar affinity. Detailed structural analysis provided insight into the binding of AcCoA. An acceptor-binding cavity was identified, indicating that the physiological substrate of SsAT may be a small molecule. Consistent with recently published work, the SsAT structure further questions a HAT function for the human OGA domain

  19. Three-dimensional structure of a Streptomyces sviceus GNAT acetyltransferase with similarity to the C-terminal domain of the human GH84 O-GlcNAcase

    Energy Technology Data Exchange (ETDEWEB)

    He, Yuan [Northwest University, Xi’an 710069 (China); The University of York, York YO10 5DD (United Kingdom); Roth, Christian; Turkenburg, Johan P.; Davies, Gideon J., E-mail: gideon.davies@york.ac.uk [The University of York, York YO10 5DD (United Kingdom); Northwest University, Xi’an 710069 (China)

    2014-01-01

    The crystal structure of a bacterial acetyltransferase with 27% sequence identity to the C-terminal domain of human O-GlcNAcase has been solved at 1.5 Å resolution. This S. sviceus protein is compared with known GCN5-related acetyltransferases, adding to the diversity observed in this superfamily. The mammalian O-GlcNAc hydrolysing enzyme O-GlcNAcase (OGA) is a multi-domain protein with glycoside hydrolase activity in the N-terminus and with a C-terminal domain that has low sequence similarity to known acetyltransferases, prompting speculation, albeit controversial, that the C-terminal domain may function as a histone acetyltransferase (HAT). There are currently scarce data available regarding the structure and function of this C-terminal region. Here, a bacterial homologue of the human OGA C-terminal domain, an acetyltransferase protein (accession No. ZP-05014886) from Streptomyces sviceus (SsAT), was cloned and its crystal structure was solved to high resolution. The structure reveals a conserved protein core that has considerable structural homology to the acetyl-CoA (AcCoA) binding site of GCN5-related acetyltransferases (GNATs). Calorimetric data further confirm that SsAT is indeed able to bind AcCoA in solution with micromolar affinity. Detailed structural analysis provided insight into the binding of AcCoA. An acceptor-binding cavity was identified, indicating that the physiological substrate of SsAT may be a small molecule. Consistent with recently published work, the SsAT structure further questions a HAT function for the human OGA domain.

  20. High-dimensional orbital angular momentum entanglement concentration based on Laguerre–Gaussian mode selection

    International Nuclear Information System (INIS)

    Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang

    2013-01-01

    Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth and is based on the careful selection of Laguerre-Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration in an OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality can be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)

  1. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
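
    As a toy illustration of the kind of search the abstract describes (not the paper's algorithm), one can scan one-dimensional projections of the execution data and flag bins whose mean model error is anomalously high; the bin count and error threshold here are arbitrary assumptions.

    ```python
    import numpy as np

    def find_rims_1d(contexts, errors, bins=4, threshold=0.5):
        """Scan each 1-D projection of the context space, bin the execution
        data, and flag bins whose mean model error exceeds a threshold.
        contexts: (n, d) execution contexts; errors: (n,) model errors.
        Returns a list of (dimension, bin_low, bin_high) candidate regions."""
        n, d = contexts.shape
        rims = []
        for dim in range(d):                       # each low-dim projection
            col = contexts[:, dim]
            edges = np.linspace(col.min(), col.max(), bins + 1)
            idx = np.clip(np.digitize(col, edges) - 1, 0, bins - 1)
            for b in range(bins):
                mask = idx == b
                if mask.any() and errors[mask].mean() > threshold:
                    rims.append((dim, edges[b], edges[b + 1]))
        return rims
    ```

    The actual method is considerably more sophisticated (it searches parametric regions informedly across projections), but the sketch shows why projecting first makes the detection tractable: each scan is over a one-dimensional axis rather than the full context space.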

  2. High-functioning autism patients share similar but more severe impairments in verbal theory of mind than schizophrenia patients.

    Science.gov (United States)

    Tin, L N W; Lui, S S Y; Ho, K K Y; Hung, K S Y; Wang, Y; Yeung, H K H; Wong, T Y; Lam, S M; Chan, R C K; Cheung, E F C

    2018-06-01

    Evidence suggests that autism and schizophrenia share similarities in genetic, neuropsychological and behavioural aspects. Although both disorders are associated with theory of mind (ToM) impairments, only a few studies have directly compared ToM between autism patients and schizophrenia patients. This study aimed to investigate to what extent high-functioning autism patients and schizophrenia patients share and differ in ToM performance. Thirty high-functioning autism patients, 30 schizophrenia patients and 30 healthy individuals were recruited. Participants were matched in age, gender and estimated intelligence quotient. The verbal-based Faux Pas Task and the visual-based Yoni Task were utilised to examine first- and higher-order, affective and cognitive ToM. The task/item difficulty of the two paradigms was examined using mixed-model analyses of variance (ANOVAs). Multiple ANOVAs and mixed-model ANOVAs were used to examine group differences in ToM. The Faux Pas Task was more difficult than the Yoni Task. High-functioning autism patients showed more severely impaired verbal-based ToM on the Faux Pas Task, but shared similar visual-based ToM impairments on the Yoni Task with schizophrenia patients. The finding that individuals with high-functioning autism shared similar but more severe impairments in verbal ToM than individuals with schizophrenia supports the autism-schizophrenia continuum. The finding that verbal-based but not visual-based ToM was more impaired in high-functioning autism patients than in schizophrenia patients could be attributable to the varied task/item difficulty between the two paradigms.

  3. A SNP based high-density linkage map of Apis cerana reveals a high recombination rate similar to Apis mellifera.

    Directory of Open Access Journals (Sweden)

    Yuan Yuan Shi

Full Text Available BACKGROUND: The Eastern honey bee, Apis cerana Fabricius, is distributed in southern and eastern Asia, from India and China to Korea and Japan and southeast to the Moluccas. Besides Apis mellifera, this species is also widely kept for honey production. Apis cerana is also a model organism for studying social behavior, caste determination, mating biology, sexual selection, and host-parasite interactions. Few resources are available for molecular research in this species, and a linkage map has never been constructed. A linkage map is a prerequisite for quantitative trait loci mapping and for analyzing genome structure. We used the Chinese honey bee, Apis cerana cerana, to construct the first linkage map in the Eastern honey bee. RESULTS: F2 workers (N = 103) were genotyped for 126,990 single nucleotide polymorphisms (SNPs). After filtering out low-quality SNPs and those failing the Mendel test, we obtained 3,000 SNPs; 1,535 of these were informative and used to construct a linkage map. The preliminary map contained 19 linkage groups, which we then mapped onto 16 chromosomes by comparing the markers to the genome of A. mellifera. The final map contains 16 linkage groups with a total of 1,535 markers. The total genetic distance is 3,942.7 centimorgans (cM), with the largest linkage group (180 loci) measuring 574.5 cM. The average marker interval for all markers across the 16 linkage groups is 2.6 cM. CONCLUSION: We constructed a high-density linkage map for A. c. cerana with 1,535 markers. Because the map is based on SNP markers, it will enable easier and faster genotyping assays than the randomly amplified polymorphic DNA or microsatellite based maps used in A. mellifera.
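The map statistics quoted above can be checked with a quick sketch (the interval-counting convention is an assumption, not stated in the abstract; both plausible conventions round to the reported value):

```python
# Quick check of the linkage-map statistics quoted above.
total_cm = 3942.7          # total genetic distance (cM)
n_markers = 1535           # mapped SNP markers
n_groups = 16              # linkage groups

# Two possible conventions for "average marker interval":
per_marker = total_cm / n_markers                # distance per marker
per_gap = total_cm / (n_markers - n_groups)      # distance per marker gap
print(round(per_marker, 1), round(per_gap, 1))   # both round to 2.6 cM
```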

  4. Two-Dimensional High Definition Versus Three-Dimensional Endoscopy in Endonasal Skull Base Surgery: A Comparative Preclinical Study.

    Science.gov (United States)

    Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero

    2017-09-01

Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. Each participant completed a questionnaire investigating subjective impressions during the tasks. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Low cognitive load strengthens distractor interference while high load attenuates it when cognitive load and distractor possess similar visual characteristics.

    Science.gov (United States)

    Minamoto, Takehiro; Shipstead, Zach; Osaka, Naoyuki; Engle, Randall W

    2015-07-01

Studies on visual cognitive load have reported inconsistent effects on distractor interference when distractors have visual characteristics similar to the cognitive load. Some studies have shown that cognitive load enhances distractor interference, while others reported an attenuating effect. We attribute these inconsistencies to the amount of cognitive load that a person is required to maintain. Lower amounts of cognitive load increase distractor interference by orienting attention toward visually similar distractors. Higher amounts of cognitive load attenuate distractor interference by depleting the attentional resources needed to process distractors. In the present study, cognitive load consisted of faces (Experiments 1-3) or scenes (Experiment 2). Participants performed a selective attention task in which they ignored face distractors while judging the color of a target dot presented nearby, under differing amounts of load. Across these experiments, distractor interference was greater in the low-load condition and smaller in the high-load condition when the content of the cognitive load had visual characteristics similar to the distractors. We also found that when a series of judgments needed to be made, the effect was apparent for the first trial but not for the second. We further tested the involvement of working memory capacity (WMC) in the load effect (Experiment 3). Interestingly, high and low WMC groups showed an equivalent effect of the cognitive load on the first distractor, suggesting these effects are fairly automatic.

  6. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
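The technique named in this record can be illustrated with a minimal numpy sketch (not the authors' method; the k-th-nearest-neighbour bandwidth rule is one common choice for variable kernel density estimation and is assumed here):

```python
import numpy as np

def variable_kde(train, query, k=10):
    """Sample-point (variable-bandwidth) Gaussian KDE in d dimensions.

    Each training point gets its own bandwidth: the distance to its
    k-th nearest neighbour, so kernels widen in sparse regions.
    """
    train = np.asarray(train, float)
    query = np.asarray(query, float)
    n, d = train.shape
    # pairwise distances among training points -> per-point bandwidths
    diff = train[:, None, :] - train[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    h = np.sort(dist, axis=1)[:, k]          # column 0 is the self-distance
    # evaluate the mixture of per-point Gaussians at the query points
    sq = ((query[:, None, :] - train[None, :, :]) ** 2).sum(-1)   # (m, n)
    norm = (2 * np.pi) ** (d / 2) * h ** d
    dens = np.exp(-0.5 * sq / h ** 2) / norm
    return dens.mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
p = variable_kde(x, np.array([[0.0, 0.0], [4.0, 4.0]]))
print(p[0] > p[1])  # density is higher at the mode than far in the tail
```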

  7. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    Science.gov (United States)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  8. Innovation Rather than Improvement: A Solvable High-Dimensional Model Highlights the Limitations of Scalar Fitness

    Science.gov (United States)

    Tikhonov, Mikhail; Monasson, Remi

    2018-01-01

    Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.

  9. Carbon doped GaAs/AlGaAs heterostructures with high mobility two dimensional hole gas

    Energy Technology Data Exchange (ETDEWEB)

    Hirmer, Marika; Bougeard, Dominique; Schuh, Dieter [Institut fuer Experimentelle und Angewandte Physik, Universitaet Regensburg, D 93040 Regensburg (Germany); Wegscheider, Werner [Laboratorium fuer Festkoerperphysik, ETH Zuerich, 8093 Zuerich (Switzerland)

    2011-07-01

Two-dimensional hole gases (2DHGs) with high carrier mobilities are required both for fundamental research and for possible future ultrafast spintronic devices. Here, two different types of GaAs/AlGaAs heterostructures hosting a 2DHG were investigated. The first structure is a GaAs quantum well (QW) embedded in an AlGaAs barrier grown by molecular beam epitaxy with carbon doping on only one side of the QW (single-side doped, ssd), while the second structure is similar but with symmetrically arranged doping layers on both sides of the QW (double-side doped, dsd). The ssd structure shows hole mobilities up to 1.2x10^6 cm^2/Vs, achieved after illumination. In contrast, the dsd structure hosts a 2DHG with mobility up to 2.05x10^6 cm^2/Vs; here, carrier mobility and carrier density are not affected by illuminating the sample. Both samples showed distinct Shubnikov-de Haas oscillations and fractional quantum Hall plateaus in magnetotransport experiments performed at 20 mK, indicating the high quality of the material. In addition, the influence of different temperature profiles during growth and of the Al content x of the AlxGa1-xAs barrier on carrier concentration and mobility was investigated and is presented here.

  10. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples; otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, map size is less of a consideration, and errors tend to average out. In our experiments on microarray datasets, which are dense data composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and their recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.
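The many-small-maps idea can be sketched in numpy (an illustrative toy, not the paper's implementation: a simple majority vote stands in for the Bayesian decision rule, and the grid size, learning schedule, and toy data are all assumptions):

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=20, seed=0):
    """Minimal SOM trainer; returns a (grid_h * grid_w, d) codebook."""
    rng = np.random.default_rng(seed)
    h, w = grid
    units = np.array([(i, j) for i in range(h) for j in range(w)], float)
    codebook = data[rng.choice(len(data), h * w)].astype(float).copy()
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)               # decaying learning rate
        sigma = max(1.0 * (1 - t / epochs), 0.3)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((codebook - x) ** 2).sum(1))
            g = np.exp(-((units - units[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
            codebook += lr * g[:, None] * (x - codebook)
    return codebook

# Toy data: two well-separated classes in 4 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(3, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Several small maps trained on bootstrap resamples, instead of one big map.
maps, unit_labels = [], []
for s in range(3):
    idx = np.random.default_rng(s).choice(len(X), len(X))
    cb = train_som(X[idx], seed=s)
    # label each map unit with the class of its nearest training sample
    nearest = np.argmin(((cb[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1), 1)
    maps.append(cb)
    unit_labels.append(y[idx][nearest])

def predict(x):
    """Majority vote among the best-matching units of the small maps."""
    votes = [lab[np.argmin(((cb - x) ** 2).sum(1))]
             for cb, lab in zip(maps, unit_labels)]
    return max(set(votes), key=votes.count)

print(predict(np.zeros(4)), predict(np.full(4, 3.0)))  # -> 0 1
```

Averaging over several independently initialized small maps is what damps the initialization bias the abstract describes.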

  11. Gait in ducks (Anas platyrhynchos) and chickens (Gallus gallus) – similarities in adaptation to high growth rate

    Directory of Open Access Journals (Sweden)

    B. M. Duggan

    2016-08-01

Full Text Available Genetic selection for increased growth rate and muscle mass in broiler chickens has been accompanied by mobility issues and poor gait. There are concerns that the Pekin duck, which is on a similar selection trajectory (for production traits) to the broiler chicken, may encounter gait problems in the future. In order to understand how gait has been altered by selection, the walking ability of divergent lines of high- and low-growth chickens and ducks was objectively measured using a pressure platform, which recorded various components of their gait. In both species, lines which had been selected for large breast muscle mass moved at a slower velocity and with a greater step width than their lighter conspecifics. These high-growth lines also spent more time supported by two feet in order to improve balance when compared with their lighter, low-growth conspecifics. We demonstrate that chicken and duck lines which have been subjected to intense selection for high growth rates and meat yields have adapted their gait in similar ways. A greater understanding of which components of gait have been altered in selected lines with impaired walking ability may lead to more effective breeding strategies to improve gait in poultry.

  12. High counting rate, two-dimensional position sensitive timing RPC

    CERN Document Server

    Petrovici, M.; Simion, V; Bartos, D; Caragheorgheopol, G; Deppner, I; Adamczewski-Musch, J; Linev, S; Williams, MCS; Loizeau, P; Herrmann, N; Doroud, K; Radulescu, L; Constantin, F

    2012-01-01

Resistive Plate Chambers (RPCs) are widely employed as muon trigger systems at the Large Hadron Collider (LHC) experiments. Their large detector volume and the use of a relatively expensive gas mixture make a closed-loop gas circulation unavoidable. The return gas of RPCs operated in conditions similar to the experimental background foreseen at the LHC contains large amounts of impurities that are potentially dangerous for long-term operation. Several gas-cleaning agents, characterized during the past years, are currently in use. New tests allowed understanding of the properties and performance of a large number of purifiers. On that basis, an optimal combination of different filters, consisting of Molecular Sieve (MS) 5Å and 4Å and a Cu catalyst R11, was chosen and validated by irradiating a set of RPCs at the CERN Gamma Irradiation Facility (GIF) for several years. A very important feature of this new configuration is the increase of the cycle duration for each purifier, which results in better system stabilit...

  13. HASE: Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    G.V. Roshchupkin (Gennady); H.H.H. Adams (Hieab); M.W. Vernooij (Meike); A. Hofman (Albert); C.M. van Duijn (Cornelia); M.K. Ikram (Kamran); W.J. Niessen (Wiro)

    2016-01-01

    textabstractHigh-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  14. HASE : Framework for efficient high-dimensional association analyses

    NARCIS (Netherlands)

    Roshchupkin, G. V.; Adams, H; Vernooij, Meike W.; Hofman, A; Van Duijn, C. M.; Ikram, M. Arfan; Niessen, W.J.

    2016-01-01

    High-throughput technology can now provide rich information on a person's biological makeup and environmental surroundings. Important discoveries have been made by relating these data to various health outcomes in fields such as genomics, proteomics, and medical imaging. However,

  15. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    OpenAIRE

    Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša

    2014-01-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...

  16. Secure data storage by three-dimensional absorbers in highly scattering volume medium

    International Nuclear Information System (INIS)

    Matoba, Osamu; Matsuki, Shinichiro; Nitta, Kouichi

    2008-01-01

A novel data storage method in a volume medium with a high scattering coefficient is proposed for data security applications. Three-dimensional absorbers are used as data. These absorbers cannot be measured interferometrically when the scattering in the volume medium is strong enough. We present a method to reconstruct the three-dimensional absorbers and show numerical results demonstrating the effectiveness of the proposed data storage.

  17. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.; Schoen, Alia P.; Hu, Liangbing; Kim, Han Sun; Heilshorn, Sarah C.; Cui, Yi

    2010-01-01

The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially as biofouling is a commonplace and serious problem. We here present a textile based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, yields a gravity-fed device operating at 100000 L/(h m2) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  18. High Speed Water Sterilization Using One-Dimensional Nanostructures

    KAUST Repository

    Schoen, David T.

    2010-09-08

The removal of bacteria and other organisms from water is an extremely important process, not only for drinking and sanitation but also industrially as biofouling is a commonplace and serious problem. We here present a textile based multiscale device for the high speed electrical sterilization of water using silver nanowires, carbon nanotubes, and cotton. This approach, which combines several materials spanning three very different length scales with simple dyeing-based fabrication, yields a gravity-fed device operating at 100000 L/(h m2) which can inactivate >98% of bacteria with only several seconds of total incubation time. This excellent performance is enabled by the use of an electrical mechanism rather than size exclusion, while the very high surface area of the device coupled with large electric field concentrations near the silver nanowire tips allows for effective bacterial inactivation. © 2010 American Chemical Society.

  19. One-dimensional model for QCD at high energy

    International Nuclear Information System (INIS)

    Iancu, E.; Santana Amaral, J.T. de; Soyez, G.; Triantafyllopoulos, D.N.

    2007-01-01

We propose a stochastic particle model in (1+1) dimensions, with one dimension corresponding to rapidity and the other to the transverse size of a dipole in QCD, which mimics high-energy evolution and scattering in QCD in the presence of both saturation and particle-number fluctuations, and hence of pomeron loops. The model evolves via non-linear particle splitting, with a non-local splitting rate which is constrained by boost invariance and multiple scattering. The splitting rate saturates at high density, much like the gluon emission rate in the JIMWLK evolution. In the mean field approximation obtained by ignoring fluctuations, the model exhibits the hallmarks of the BK equation, namely a BFKL-like evolution at low density, the formation of a traveling wave, and geometric scaling. In the full evolution including fluctuations, the geometric scaling is washed out at high energy and replaced by diffusive scaling. It is likely that the model belongs to the universality class of the reaction-diffusion process. The analysis of the model sheds new light on the pomeron loop equations in QCD and their possible improvements.

  20. Engineering two-photon high-dimensional states through quantum interference

    Science.gov (United States)

    Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew

    2016-01-01

Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685

  1. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
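The core difficulty this record addresses can be illustrated with a small numpy sketch (not one of the paper's eight estimators: a simple linear shrinkage toward a scaled identity stands in for Ledoit-Wolf-style methods, and all dimensions and the shrinkage weight are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 50                 # p close to n: sample covariance is ill-conditioned
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
# truth is the identity, so the true log-determinant is 0

S = np.cov(X, rowvar=False)   # sample covariance
# simple linear shrinkage toward a scaled identity (a stand-in for the
# more refined estimators compared in the paper)
alpha = 0.3
S_shrunk = (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)

sign_s, logdet_s = np.linalg.slogdet(S)        # strongly biased downward
sign_k, logdet_k = np.linalg.slogdet(S_shrunk) # much closer to 0
print(abs(logdet_k) < abs(logdet_s))
```

When p approaches n the smallest sample eigenvalues collapse toward zero, dragging the log-determinant far below its true value; shrinking the spectrum toward its mean removes most of that bias, which is why determinant estimation is usually built on a regularized covariance estimator.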

  2. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  3. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  4. Metallic and highly conducting two-dimensional atomic arrays of sulfur enabled by molybdenum disulfide nanotemplate

    Science.gov (United States)

    Zhu, Shuze; Geng, Xiumei; Han, Yang; Benamara, Mourad; Chen, Liao; Li, Jingxiao; Bilgin, Ismail; Zhu, Hongli

    2017-10-01

Elemental sulfur in nature is an insulating solid. While one-dimensional sulfur chains have been shown to be metallic and conducting, the investigation of two-dimensional sulfur remains elusive. We report that molybdenum disulfide layers are able to serve as a nanotemplate to facilitate the formation of two-dimensional sulfur. Density functional theory calculations suggest that, confined in-between layers of molybdenum disulfide, sulfur atoms are able to form two-dimensional triangular arrays that are highly metallic. As a result, these arrays contribute to the high conductivity and metallic phase of the hybrid structures of molybdenum disulfide layers and two-dimensional sulfur arrays. The experimentally measured conductivity of such hybrid structures reaches up to 223 S/m. Multiple experimental results, including X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), agree with the computational insights. Due to the excellent conductivity, the current density is linearly proportional to the scan rate up to 30,000 mV s-1 without the addition of conductive additives. Using such hybrid structures as electrodes, the two-electrode supercapacitor cells yield a power density of 106 Wh kg-1 and an energy density of 47.5 Wh kg-1 in ionic liquid electrolytes. Our findings offer new insights into using two-dimensional materials and their Van der Waals heterostructures as nanotemplates to pattern foreign atoms for unprecedented material properties.

  5. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
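DataHigh's central idea, smoothly sweeping through a continuum of 2-d views of a latent space, can be sketched without the GUI (an illustrative toy, not the released tool: PCA stands in for its dimensionality-reduction step, and the data, latent dimensionality, and interpolation scheme are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake "population activity": 100 trials x 20 neurons with low-dim structure
latent = rng.normal(size=(100, 5))
loading = rng.normal(size=(5, 20))
activity = latent @ loading + 0.1 * rng.normal(size=(100, 20))

# Latent extraction via PCA (DataHigh offers several reduction methods)
Xc = activity - activity.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                     # 5-d latent trajectories

def projection_path(P0, P1, steps=10):
    """Smoothly interpolate between two 2-d projection planes.

    Linear interpolation followed by QR re-orthonormalization, so every
    intermediate frame is a valid orthonormal 2-d projection.
    """
    for t in np.linspace(0, 1, steps):
        Q, _ = np.linalg.qr((1 - t) * P0 + t * P1)
        yield Q[:, :2]

P0 = np.eye(5)[:, :2]                 # plane of latent dims 1-2
P1 = np.eye(5)[:, 2:4]                # plane of latent dims 3-4
for P in projection_path(P0, P1):
    view = Z @ P                      # the 2-d cloud a user would see
    assert np.allclose(P.T @ P, np.eye(2))   # each frame stays orthonormal
```

Animating `view` frame by frame gives the continuous rotation between projections that makes high-dimensional structure easier to spot than any single static 2-d plot.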

  6. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    Science.gov (United States)

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  7. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    Science.gov (United States)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
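
    The core visualization idea, navigating 2-d projections of a higher-dimensional latent space, can be sketched in a few lines of numpy. This is a minimal illustration, not the DataHigh code itself; the interface smoothly rotates between many such projection planes.

```python
import numpy as np

def random_2d_projection(latent, seed=None):
    """Project n_timepoints x n_latents latent trajectories onto a random
    orthonormal 2-d plane (one 'frame' of a DataHigh-style rotation)."""
    rng = np.random.default_rng(seed)
    n_latents = latent.shape[1]
    # Orthonormalize two random direction vectors via QR.
    q, _ = np.linalg.qr(rng.standard_normal((n_latents, 2)))
    return latent @ q  # n_timepoints x 2

# Example: view a hypothetical 10-dimensional latent trajectory in 2-d.
traj = np.cumsum(np.random.default_rng(0).standard_normal((100, 10)), axis=0)
view = random_2d_projection(traj, seed=1)
```

    Smoothly interpolating between successive planes (rather than redrawing independent random ones) is what gives the continuum of views described in the abstract.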

  8. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  9. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  10. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
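
    A compact numpy sketch of the two-stage idea, under simplifying assumptions (Gaussian kernel, fixed bandwidth, no out-of-sample extension): embed the data with a diffusion map, then run Gaussian process regression using distances measured in the embedded space.

```python
import numpy as np

def diffusion_coords(X, eps=1.0, k=2, t=1):
    """Diffusion-map embedding: build a Markov matrix from a Gaussian kernel,
    then keep the top non-trivial eigenvectors scaled by eigenvalue**t."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)          # row-stochastic
    w, V = np.linalg.eig(P)
    idx = np.argsort(-w.real)
    w, V = w.real[idx], V.real[:, idx]
    return V[:, 1:k + 1] * (w[1:k + 1] ** t)      # drop the trivial constant mode

def gp_predict(coords_train, y, coords_test, length=1.0, noise=1e-6):
    """GP regression whose correlation structure is set by distances
    in the diffusion (embedded) space."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length ** 2))
    Kxx = kern(coords_train, coords_train) + noise * np.eye(len(coords_train))
    return kern(coords_test, coords_train) @ np.linalg.solve(Kxx, y)
```

    The bandwidth `eps`, embedding dimension `k`, and GP length scale are illustrative assumptions; the article selects these quantities from the data.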

  11. Model-based Clustering of High-Dimensional Data in Astrophysics

    Science.gov (United States)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in large volumes or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parameterization. Recent developments in model-based classification overcome these drawbacks and make it possible to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
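
    As a toy illustration of why parsimonious models help, here is a minimal EM algorithm for a Gaussian mixture with diagonal covariances, one common parsimonious family (the reviewed methods are far more general): each component needs only 2p parameters instead of the O(p²) of a full covariance matrix.

```python
import numpy as np

def gmm_diag_em(X, n_components=2, n_iter=50, seed=0):
    """EM for a diagonal-covariance Gaussian mixture -- a parsimonious
    model that stays estimable when the dimension p is large."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Farthest-point initialization of the component means.
    idx = [int(rng.integers(n))]
    while len(idx) < n_components:
        d = ((X[:, None, :] - X[idx]) ** 2).sum(-1).min(axis=1)
        idx.append(int(d.argmax()))
    mu = X[idx].astype(float)
    var = np.tile(X.var(axis=0), (n_components, 1))
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibilities from the diagonal Gaussian log-densities.
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, and mixing proportions.
        nk = r.sum(axis=0) + 1e-12
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-8
        pi = nk / n
    return r.argmax(axis=1)
```
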

  12. Linear stability theory as an early warning sign for transitions in high dimensional complex systems

    International Nuclear Information System (INIS)

    Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft

    2016-01-01

    We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
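
    A minimal numpy sketch of the indicator's ingredients, on a hypothetical two-species example rather than the Tangled Nature model: diagonalize the mean-field stability matrix and measure the overlap of the instantaneous deviation with the most unstable eigendirection.

```python
import numpy as np

def instability_indicator(J, x, x_star):
    """Early-warning indicator: overlap of the (normalized) deviation of the
    stochastic state x from the quasi-stable configuration x_star with the
    most unstable eigendirection of the mean-field stability matrix J."""
    w, V = np.linalg.eig(J)
    lead = np.argsort(-w.real)[0]          # most unstable direction first
    v = V.real[:, lead]
    v /= np.linalg.norm(v)
    dx = x - x_star
    dx = dx / np.linalg.norm(dx)
    return abs(v @ dx), w.real[lead]

# Hypothetical 2-d mean field: stable along axis 1, weakly unstable along axis 2.
J = np.diag([-1.0, 0.5])
overlap, lam = instability_indicator(J, np.array([1.0, 1.3]), np.array([1.0, 1.0]))
```

    A sustained rise of this overlap, while the leading eigenvalue approaches or crosses zero, is the kind of signal the paper uses to anticipate a transition.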

  13. High-dimensional atom localization via spontaneously generated coherence in a microwave-driven atomic system.

    Science.gov (United States)

    Wang, Zhiping; Chen, Jinyu; Yu, Benli

    2017-02-20

    We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.

  14. Highly similar prokaryotic communities of sunken wood at shallow and deep-sea sites across the oceans.

    Science.gov (United States)

    Palacios, Carmen; Zbinden, Magali; Pailleret, Marie; Gaill, Françoise; Lebaron, Philippe

    2009-11-01

    With an increased appreciation of the frequency of their occurrence, large organic falls such as sunken wood and whale carcasses have become important to consider in the ecology of the oceans. Organic-rich deep-sea falls may play a major role in the dispersal and evolution of chemoautotrophic communities at the ocean floor, and chemosynthetic symbiotic, free-living, and attached microorganisms may drive the primary production at these communities. However, little is known about the microbiota thriving in and around organic falls. Our aim was to investigate and compare free-living and attached communities of bacteria and archaea from artificially immersed and naturally sunken wood logs with varying characteristics at several sites in the deep sea and in shallow water to address basic questions on the microbial ecology of sunken wood. Multivariate indirect ordination analyses of capillary electrophoresis single-stranded conformation polymorphisms (CE-SSCP) fingerprinting profiles demonstrated high similarity of bacterial and archaeal assemblages present in timbers and logs situated at geographically distant sites and at different depths of immersion. This similarity implies that wood falls harbor a specialized microbiota as observed in other ecosystems when the same environmental conditions reoccur. Scanning and transmission electron microscopy observations combined with multivariate direct gradient analysis of Bacteria CE-SSCP profiles demonstrate that type of wood (hard vs. softwood), and time of immersion are important in structuring sunken wood bacterial communities. Archaeal populations were present only in samples with substantial signs of decay, which were also more similar in their bacterial assemblages, providing indirect evidence of temporal succession in the microbial communities that develop in and around wood falls.

  15. Interconnectedness during high water maintains similarity in fish assemblages of island floodplain lakes in the Amazonian Basin

    Directory of Open Access Journals (Sweden)

    Carlos Edwar de C. Freitas

    2010-01-01

    We conducted a study to test the hypothesis that interconnectedness among island floodplain lakes and the adjacent Solimões River during the flood stage of the hydrologic cycle is enough to maintain similarity in fish species assemblages. Gill net samples were collected during high and low water periods for three consecutive years (July 2004 to July 2006) in four lakes on Paciência Island. Two lakes, Piranha and Ressaca, are connected to the river all year, and the other two, Preto and Cacau, which are in the center of the island, are isolated during low water periods. The abundance, species richness and evenness of the fish assemblages in these lakes did not differ according to their relative positions or the season of the hydrological cycle, which confirmed our hypothesis. However, fish abundance during the dry season was greater than in the flood season. Apparently, the short period of full connection between the lakes is enough to allow the colonization of all fish species, but not to cause similar abundances. Our study indicates that persistence of the species composition of island floodplain lakes is primarily due to the annual replenishment of fish to the lakes during the flood season.
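
    Assemblage similarity of the kind compared here is typically quantified with indices such as Bray-Curtis (on abundances) or Jaccard (on presence/absence). A minimal numpy sketch, illustrative only since the abstract does not name the exact statistics used:

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors
    (0 = identical assemblages, 1 = no shared species)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

def jaccard(a, b):
    """Jaccard similarity on presence/absence: shared species / total species."""
    pa, pb = np.asarray(a) > 0, np.asarray(b) > 0
    return (pa & pb).sum() / (pa | pb).sum()

# Hypothetical catches (individuals per species) in two of the lakes.
lake_connected = [12, 5, 0, 3]
lake_isolated = [10, 6, 1, 0]
d = bray_curtis(lake_connected, lake_isolated)
```
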

  16. Activity/inactivity circadian rhythm shows high similarities between young obesity-induced rats and old rats.

    Science.gov (United States)

    Bravo Santos, R; Delgado, J; Cubero, J; Franco, L; Ruiz-Moyano, S; Mesa, M; Rodríguez, A B; Uguz, C; Barriga, C

    2016-03-01

    The objective of the present study was to compare differences between elderly rats and young obesity-induced rats in their activity/inactivity circadian rhythm. The investigation was motivated by the differences reported previously for the circadian rhythms of both obese and elderly humans (and other animals), and those of healthy, young or mature individuals. Three groups of rats were formed: a young control group that was fed a standard chow for rodents; a young obesity-induced group that was fed a high-fat diet for four months; and an elderly control group of rats aged 2.5 years that was fed a standard chow for rodents. Activity/inactivity data were registered through actimetry, using infrared actimeter systems in each cage to detect activity. Data were logged on a computer and chronobiological analyses were performed. The results showed diurnal activity (sleep time), nocturnal activity (awake time), amplitude, acrophase, and interdaily stability to be similar between the young obesity-induced group and the elderly control group, but different in the young control group. We conclude that obesity leads to a state of chronodisruption in the body similar to the circadian rhythm degradation observed in the elderly.
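
    A standard tool for extracting the amplitude and acrophase reported in such studies is the single-component cosinor fit, which is ordinary least squares on a cosine and sine regressor. This is a generic sketch; the abstract does not name the authors' exact chronobiological software.

```python
import numpy as np

def cosinor(t, y, period=24.0):
    """Single-component cosinor fit y ~ M + A*cos(2*pi*t/period + phi).
    Returns mesor M, amplitude A and acrophase phi (radians)."""
    w = 2 * np.pi * np.asarray(t) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    M, beta, gamma = np.linalg.lstsq(X, np.asarray(y), rcond=None)[0]
    # y = M + beta*cos(wt) + gamma*sin(wt) = M + A*cos(wt + phi)
    return M, np.hypot(beta, gamma), np.arctan2(-gamma, beta)

# Synthetic actimetry: 4 days sampled every 30 min, 24 h rhythm.
t = np.arange(0, 96, 0.5)
y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0)
M, A, phi = cosinor(t, y)
```
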

  17. Characterization of a highly toxic strain of Bacillus thuringiensis serovar kurstaki very similar to the HD-73 strain.

    Science.gov (United States)

    Reinoso-Pozo, Yaritza; Del Rincón-Castro, Ma Cristina; Ibarra, Jorge E

    2016-09-01

    The LBIT-1200 strain of Bacillus thuringiensis was recently isolated from soil, and showed a 6.4- and 9.5-fold increase in toxicity against Manduca sexta and Trichoplusia ni, respectively, compared to HD-73. However, LBIT-1200 was still highly similar to HD-73, including the production of bipyramidal crystals containing only one protein of ∼130 kDa, a flagellin gene sequence related to the kurstaki serotype, plasmid and Rep-PCR patterns similar to HD-73, no production of β-exotoxin and no VIP genes. Sequencing of its cry gene showed the presence of a cry1Ac-type gene with four amino acid differences, including two amino acid replacements in domain III, compared to Cry1Ac1, which may explain its higher toxicity. In conclusion, the LBIT-1200 strain is a variant of the HD-73 strain but shows a much higher toxicity, which makes this new strain an important candidate to be developed as a bioinsecticide, once it passes further tests during its biotechnological development. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. High-dimensional quantum key distribution based on multicore fiber using silicon photonic integrated circuits

    DEFF Research Database (Denmark)

    Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld

    2017-01-01

    is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...
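
    The capacity gain that motivates high-dimensional encoding is simple to state: a photon prepared in one of d orthogonal states carries log₂(d) bits, so, for example, four spatial modes of a multicore fiber double the 1 bit/photon qubit limit. This is a back-of-envelope statement only; the achievable secret-key rate also depends on error rates and the security analysis.

```python
import numpy as np

def bits_per_photon(d):
    """Ideal information content of one photon encoded in a
    d-dimensional (qudit) state space: log2(d) bits."""
    return np.log2(d)

# Hypothetical four-core encoding vs. the binary (qubit) limit.
gain = bits_per_photon(4) / bits_per_photon(2)
```
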

  19. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    International Nuclear Information System (INIS)

    Hayashi, Y.; Hirose, Y.; Seno, Y.

    2016-01-01

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  20. Scanning three-dimensional x-ray diffraction microscopy using a high-energy microbeam

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Y., E-mail: y-hayashi@mosk.tytlabs.co.jp; Hirose, Y.; Seno, Y. [Toyota Central R&D Labs., Inc., 41-1 Nagakute, Aichi 480-1192 (Japan)]

    2016-07-27

    A scanning three-dimensional X-ray diffraction (3DXRD) microscope apparatus with a high-energy microbeam was installed at the BL33XU Toyota beamline at SPring-8. The size of the 50 keV beam focused using Kirkpatrick-Baez mirrors was 1.3 μm wide and 1.6 μm high in full width at half maximum. The scanning 3DXRD method was tested for a cold-rolled carbon steel sheet sample. A three-dimensional orientation map with 37³ voxels was obtained.

  1. The validation and assessment of machine learning: a game of prediction from high-dimensional data

    DEFF Research Database (Denmark)

    Pers, Tune Hannes; Albrechtsen, A; Holst, C

    2009-01-01

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....

  2. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)

    2012-04-15

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions due to the porous structure, and its potential application for simultaneous detection of heavy metal ions was also investigated.

  3. Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions

    International Nuclear Information System (INIS)

    Zhang, Yuxiao; Zhang, Jianming; Liu, Yang; Huang, Hui; Kang, Zhenhui

    2012-01-01

    Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS–CO–Cys). ► MPCS–CO–Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for detection of trace heavy metal ions using chemically modified highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. The spheres were covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy, and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions due to the porous structure, and its potential application for simultaneous detection of heavy metal ions was also investigated.

  4. Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids

    International Nuclear Information System (INIS)

    Jakeman, John D.; Archibald, Richard; Xiu Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented and various numerical examples are used to demonstrate the efficacy of the method.
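
    The underlying principle can be shown in its simplest 1-d form: a first-order annihilation stencil (the forward difference) cancels constants, so it tends to zero in smooth regions but to the jump height across a discontinuity. This is a toy sketch; the paper's method uses higher-order annihilation on adaptively refined sparse grids in many dimensions.

```python
import numpy as np

def jump_indicator(x, f):
    """First-order polynomial-annihilation sketch in 1-d: the forward
    difference is O(h) in smooth regions and approaches the jump height
    across a discontinuity."""
    return x[:-1], f[1:] - f[:-1]

# Locate the discontinuity of a step-plus-smooth function.
x = np.linspace(0, 1, 401)
f = np.sin(2 * np.pi * x) + (x > 0.6)      # jump of height 1 at x = 0.6
xs, ind = jump_indicator(x, f)
i = np.abs(ind).argmax()                    # interval containing the jump
```
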

  5. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
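
    The alternating least-squares construction can be illustrated in its simplest form: a rank-1 separated approximation u(x)v(y) of a bivariate function sampled on a tensor-product grid. This toy sketch omits the regularization and error indicator described in the abstract.

```python
import numpy as np

def rank1_als(F, n_iter=20):
    """Alternating least squares for a rank-1 separated representation
    F(x, y) ~ u(x) * v(y): each step solves a linear least-squares
    problem for one factor with the other held fixed."""
    u = np.ones(F.shape[0])
    for _ in range(n_iter):
        v = F.T @ u / (u @ u)      # least-squares update of v given u
        u = F @ v / (v @ v)        # least-squares update of u given v
    return u, v

# An exactly separable model: f(x, y) = exp(-x) * cos(y).
x = np.linspace(0, 1, 50)
y = np.linspace(0, 1, 60)
F = np.exp(-x)[:, None] * np.cos(y)[None, :]
u, v = rank1_als(F)
err = np.linalg.norm(F - np.outer(u, v)) / np.linalg.norm(F)
```

    For genuinely high-dimensional inputs the same alternating idea is applied factor by factor over a sum of separated terms, which is what keeps the cost low in the number of random inputs.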

  6. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of the curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  7. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  8. Gene Flow Results in High Genetic Similarity Between Sibiraea (Rosaceae) species in the Qinghai-Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Fu

    2016-10-01

    Studying closely related species and divergent populations provides insight into the process of speciation. Previous studies showed that the evolutionary history of the Sibiraea complex on the Qinghai-Tibetan Plateau (QTP) was unclear and that the species could not be distinguished at the molecular level. In this study, the genetic structure and gene flow of S. laevigata and S. angustata on the QTP were examined across 45 populations using 8 microsatellite loci. Microsatellites revealed high genetic diversity in Sibiraea populations. Most of the variance was detected within populations (87.45%) rather than between species (4.39%). We found no significant correlations between genetic and geographical distances among populations. Bayesian cluster analysis grouped all individuals in the sympatric area of Sibiraea into one cluster and the other individuals of S. angustata into another. Divergence history analysis based on the approximate Bayesian computation method indicated that the populations of S. angustata in the sympatric area derived from the admixture of the 2 species. The assignment test assigned all individuals to populations of their own species rather than to the congeneric species. Consistently, only intraspecific, not interspecific, first-generation migrants were detected. The long-term bidirectional gene flow between the 2 species was asymmetric, with more from S. angustata to S. laevigata. In conclusion, the Sibiraea complex is distinguishable at the molecular level using microsatellite loci. We found that the high genetic similarity of these 2 species resulted from substantial bidirectional gene flow, especially in the sympatric area where population admixture between the species occurred.

  9. THREE-DIMENSIONAL ATMOSPHERIC CIRCULATION OF HOT JUPITERS ON HIGHLY ECCENTRIC ORBITS

    International Nuclear Information System (INIS)

    Kataria, T.; Showman, A. P.; Lewis, N. K.; Fortney, J. J.; Marley, M. S.; Freedman, R. S.

    2013-01-01

    Of the over 800 exoplanets detected to date, over half are on non-circular orbits, with eccentricities as high as 0.93. Such orbits lead to time-variable stellar heating, which has major implications for the planet's atmospheric dynamical regime. However, little is known about the fundamental dynamical regime of such planetary atmospheres, and how it may influence the observations of these planets. Therefore, we present a systematic study of hot Jupiters on highly eccentric orbits using the SPARC/MITgcm, a model which couples a three-dimensional general circulation model (the MITgcm) with a plane-parallel, two-stream, non-gray radiative transfer model. In our study, we vary the eccentricity and orbit-average stellar flux over a wide range. We demonstrate that the eccentric hot Jupiter regime is qualitatively similar to that of planets on circular orbits; the planets possess a superrotating equatorial jet and exhibit large day-night temperature variations. As in Showman and Polvani, we show that the day-night heating variations induce momentum fluxes equatorward to maintain the superrotating jet throughout its orbit. We find that as the eccentricity and/or stellar flux is increased (corresponding to shorter orbital periods), the superrotating jet strengthens and narrows, due to a smaller Rossby deformation radius. For a select number of model integrations, we generate full-orbit light curves and find that the timing of transit and secondary eclipse viewed from Earth with respect to periapse and apoapse can greatly affect what we see in infrared (IR) light curves; the peak in IR flux can lead or lag secondary eclipse depending on the geometry. For those planets that have large temperature differences from dayside to nightside and rapid rotation rates, we find that the light curves can exhibit 'ringing' as the planet's hottest region rotates in and out of view from Earth. These results can be used to explain future observations of eccentric transiting exoplanets.
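
    The time-variable stellar heating that drives these dynamics follows directly from Kepler's equation. A short numpy sketch (orbital mechanics only, not the SPARC/MITgcm model) of the instantaneous flux along a highly eccentric e = 0.9 orbit:

```python
import numpy as np

def orbital_flux(t, period=1.0, a=1.0, ecc=0.9, L=1.0):
    """Stellar flux seen by a planet on an eccentric orbit: solve Kepler's
    equation M = E - e*sin(E) for the eccentric anomaly by Newton iteration,
    then F = L / (4*pi*r**2) with r = a*(1 - e*cos(E))."""
    M = 2 * np.pi * t / period              # mean anomaly
    E = M + ecc * np.sin(M)                 # standard starting guess
    for _ in range(50):                     # Newton iterations
        E -= (E - ecc * np.sin(E) - M) / (1 - ecc * np.cos(E))
    r = a * (1 - ecc * np.cos(E))
    return L / (4 * np.pi * r ** 2), r

t = np.linspace(0.0, 1.0, 1001)             # one full orbit, periapse at t = 0
F, r = orbital_flux(t, ecc=0.9)
```

    For e = 0.9 the periapse-to-apoapse flux contrast is ((1+e)/(1-e))² = 361, which is the strongly time-dependent forcing the simulations must handle.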

  10. High-dose bee venom exposure induces similar tolerogenic B-cell responses in allergic patients and healthy beekeepers.

    Science.gov (United States)

    Boonpiyathad, T; Meyer, N; Moniuszko, M; Sokolowska, M; Eljaszewicz, A; Wirz, O F; Tomasiak-Lozowska, M M; Bodzenta-Lukaszyk, A; Ruxrungtham, K; van de Veen, W

    2017-03-01

    The involvement of B cells in allergen tolerance induction remains largely unexplored. This study investigates the role of B cells in this process by comparing B-cell responses in allergic patients before and during allergen immunotherapy (AIT) and in naturally exposed healthy beekeepers before and during the beekeeping season. Circulating B cells were characterized by flow cytometry. Phospholipase A2 (PLA)-specific B cells were identified using dual-color staining with fluorescently labeled PLA. Expression of regulatory B-cell-associated surface markers, interleukin-10, chemokine receptors, and immunoglobulin heavy-chain isotypes was measured. Specific and total IgG1, IgG4, IgA, and IgE from plasma as well as culture supernatants of PLA-specific cells were measured by ELISA. Strikingly, similar responses were observed in allergic patients and beekeepers after venom exposure. Both groups showed increased frequencies of plasmablasts, PLA-specific memory B cells, and IL-10-secreting CD73⁻CD25⁺CD71⁺ BR1 cells. Phospholipase A2-specific IgG4-switched memory B cells expanded after bee venom exposure. Interestingly, PLA-specific B cells showed increased CCR5 expression after high-dose allergen exposure, while CXCR4, CXCR5, CCR6, and CCR7 expression remained unaffected. This study provides the first detailed characterization of allergen-specific B cells before and after bee venom tolerance induction. The observed B-cell responses in both venom immunotherapy-treated patients and naturally exposed beekeepers suggest a similar functional immunoregulatory role for B cells in allergen tolerance in both groups. These findings can be investigated in other AIT models to determine their potential as biomarkers of early and successful AIT responses. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Protein profiling reveals inter-individual protein homogeneity of arachnoid cyst fluid and high qualitative similarity to cerebrospinal fluid

    Directory of Open Access Journals (Sweden)

    Berle Magnus

    2011-05-01

    the majority of abundant proteins in AC fluid can also be found in CSF. Compared to plasma, as many as 104 proteins in AC were not found in the list of 3017 plasma proteins. Conclusions: Based on the protein content of AC fluid, our data indicate that temporal AC is a homogeneous condition, pointing towards a similar AC filling mechanism in the 14 patients examined. Most of the proteins identified in AC fluid have been identified in CSF, indicating high similarity in the qualitative protein content of AC to CSF, whereas this was not the case between AC and plasma. This indicates that AC is filled with a liquid similar to CSF. As far as we know, this is the first proteomics study that explores the AC fluid proteome.

  12. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available A high-speed CMOS camera is a new kind of transducer for videogrammetric measurement to monitor the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of a shaking table structure.

  13. High copy number of highly similar mariner-like transposons in planarian (Platyhelminthe): evidence for a trans-phyla horizontal transfer.

    Science.gov (United States)

    Garcia-Fernàndez, J; Bayascas-Ramírez, J R; Marfany, G; Muñoz-Mármol, A M; Casali, A; Baguñà, J; Saló, E

    1995-05-01

    Several DNA sequences similar to the mariner element were isolated and characterized in the platyhelminthe Dugesia (Girardia) tigrina. They were 1,288 bp long, flanked by two 32 bp-inverted repeats, and contained a single 339 amino acid open-reading frame (ORF) encoding the transposase. The number of copies of this element is approximately 8,000 per haploid genome, constituting a member of the middle-repetitive DNA of Dugesia tigrina. Sequence analysis of several elements showed a high percentage of conservation between the different copies. Most of them presented an intact ORF and the standard signals of actively expressed genes, which suggests that some of them are or have recently been functional transposons. The high degree of similarity shared with other mariner elements from some arthropods, together with the fact that this element is undetectable in other planarian species, strongly suggests a case of horizontal transfer between these two distant phyla.

  14. An irregular grid approach for pricing high-dimensional American options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2008-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  15. Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?

    Science.gov (United States)

    Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W

    2018-03-01

    The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, the least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
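
The confounder-selection comparison above includes the least absolute shrinkage and selection operator. As a rough, self-contained illustration of how an L1 penalty selects a sparse subset of a high-dimensional covariate space (a toy sketch on simulated data, not the study's claims-data pipeline; the ISTA solver, penalty level, and dimensions below are all assumptions):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5/n * ||y - Xb||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return b

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]               # only 3 of 50 covariates carry signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(b_hat) > 1e-3)
print(selected)                                 # indices of the retained covariates
```

With only three simulated covariates carrying true signal, the L1 penalty zeroes out the rest; in the epidemiologic setting described above, the surviving covariates would be the candidates carried into the propensity score model.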

  16. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available : QELS_Fundamental Science, San Jose, California United States, 9-14 June 2013 Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements D. Giovannini1, ⇤, J. Romero1, 2, J. Leach3, A...

  17. Three-dimensionality of field-induced magnetism in a high-temperature superconductor

    DEFF Research Database (Denmark)

    Lake, B.; Lefmann, K.; Christensen, N.B.

    2005-01-01

    Many physical properties of high-temperature superconductors are two-dimensional phenomena derived from their square-planar CuO(2) building blocks. This is especially true of the magnetism from the copper ions. As mobile charge carriers enter the CuO(2) layers, the antiferromagnetism of the parent...

  18. Finding and Visualizing Relevant Subspaces for Clustering High-Dimensional Astronomical Data Using Connected Morphological Operators

    NARCIS (Netherlands)

    Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2010-01-01

    Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.

  19. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  20. Estimating the effect of a variable in a high-dimensional regression model

    DEFF Research Database (Denmark)

    Jensen, Peter Sandholt; Wurtz, Allan

    assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider  properties of the following methods, which rely on lowdimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...

  1. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-01-01

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive

  2. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization

    NARCIS (Netherlands)

    Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2016-01-01

    textabstractThis paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main

  3. Using Localised Quadratic Functions on an Irregular Grid for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit

  4. An Irregular Grid Approach for Pricing High-Dimensional American Options

    NARCIS (Netherlands)

    Berridge, S.J.; Schumacher, J.M.

    2004-01-01

    We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  5. Pricing and hedging high-dimensional American options : an irregular grid approach

    NARCIS (Netherlands)

    Berridge, S.; Schumacher, H.

    2002-01-01

    We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE

  6. Zero- and two-dimensional hybrid carbon phosphors for high colorimetric purity white light-emission.

    Science.gov (United States)

    Ding, Yamei; Chang, Qing; Xiu, Fei; Chen, Yingying; Liu, Zhengdong; Ban, Chaoyi; Cheng, Shuai; Liu, Juqing; Huang, Wei

    2018-03-01

    Carbon nanomaterials are promising phosphors for white light emission. A facile single-step synthesis method has been developed to prepare zero- and two-dimensional hybrid carbon phosphors for the first time. Zero-dimensional carbon dots (C-dots) emit bright blue luminescence under 365 nm UV light and two-dimensional nanoplates improve the dispersity and film forming ability of C-dots. As a proof-of-concept application, the as-prepared hybrid carbon phosphors emit bright white luminescence in the solid state, and the phosphor-coated blue LEDs exhibit high colorimetric purity white light-emission with a color coordinate of (0.3308, 0.3312), potentially enabling the successful application of white emitting phosphors in the LED field.

  7. Detailed investigation of the bifurcation diagram of capacitively coupled Josephson junctions in high-Tc superconductors and its self similarity

    Science.gov (United States)

    Hamdipour, Mohammad

    2018-04-01

    We study an array of coupled Josephson junctions of superconductor/insulator/superconductor type (SIS junctions) as a model for high-temperature superconductors with a layered structure. In the current-voltage characteristic of this system there is a breakpoint region in which a net electric charge appears on the superconducting layers (S-layers) of the junctions, which motivates us to study the charge dynamics in this region. In this paper we first show a current-voltage characteristic (CVC) of intrinsic Josephson junctions (IJJs) with N = 3 junctions, then locate the breakpoint region in that CVC, and then investigate the chaos in this region. We show that at the end of the breakpoint region the behavior of the system is chaotic and the Lyapunov exponent becomes positive. We also study the route by which the system becomes chaotic and find that this route proceeds through bifurcations. The final goal of this paper is to show the self-similarity in the bifurcation diagram of the system and to give a detailed analysis of that diagram.
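
The two diagnostics this abstract relies on, a bifurcation route to chaos and a sign change of the Lyapunov exponent, can be illustrated on the logistic map, a standard one-dimensional stand-in. This sketch is generic and is not the coupled-junction model itself:

```python
import numpy as np

def lyapunov_logistic(r, n_iter=2000, n_burn=500):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the average of log|f'(x_n)| after a burn-in."""
    x = 0.3
    for _ in range(n_burn):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))  # |f'(x)| for the logistic map
    return acc / n_iter

print(lyapunov_logistic(2.9))   # periodic regime: negative exponent
print(lyapunov_logistic(3.9))   # chaotic regime: positive exponent
```

The exponent crossing zero as the control parameter increases is the same signature the authors use at the end of the breakpoint region.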

  8. Pleomorphic lobular carcinoma: is it more similar to a classic lobular cancer or to a high-grade ductal cancer?

    Directory of Open Access Journals (Sweden)

    Costarelli L

    2017-12-01

    Full Text Available Leopoldo Costarelli, Domenico Campagna, Alessandra Ascarelli, Francesco Cavaliere, Maria Helena Colavito, Tatiana Ponzani, Laura Broglia, Massimo La Pinta, Elena Manna, Lucio Fortunato Breast Unit, San Giovanni-Addolorata Hospital, Rome, Italy Background: Pleomorphic invasive lobular carcinoma (P-ILC) is an uncommon variety of invasive lobular carcinoma with aggressive clinical features. Little is described in the literature regarding this topic. Materials and methods: We reviewed our experience from 2010 to 2015 and compared 40 patients with P-ILC, 126 patients with classic ILC (C-ILC), and 574 cases of high-grade invasive ductal carcinoma (HG-IDC). We studied the histologic and immunohistochemical features, clinical presentation, and surgical treatment. Results: P-ILC is diagnosed at the same age and tumor diameter as the other two histologic types. It is associated more frequently with multiple lymph node metastases and a high proliferative index, and HER2/neu is amplified in 10% of cases. In spite of sharing some histologic characteristics with C-ILC (same growth pattern, loss of E-cadherin expression, same genetic pathway), its clinical and pathologic features define an autonomous entity. Its surgical treatment is similar to that of C-ILC and HG-IDC. Conclusion: This is the first review comparing these three pathologic entities. Our findings may be useful in understanding this variety of invasive lobular carcinoma, and further studies are certainly needed in this field. Keywords: breast cancer, lobular cancer, pleomorphic, mastectomy

  9. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    Science.gov (United States)

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  10. Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors

    Science.gov (United States)

    2017-07-01

    Report AFRL-RY-WP-TR-2017-0143: Thermal Investigation of Three-Dimensional GaN-on-SiC High Electron Mobility Transistors, Qing Hao, The University of Arizona. This report is available to the general public, including foreign nationals.

  11. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
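
The enrichment step that gives EPS its power boost is simple to state: keep only the tails of the trait distribution. A minimal sketch (hypothetical trait values and quantile cutoff, not the EPS-LASSO code from the linked repository):

```python
import numpy as np

def extreme_phenotype_sample(y, q=0.1):
    """Indices of the lower and upper q-quantile tails of a continuous trait;
    this is the enrichment step of extreme phenotype sampling (EPS)."""
    lo, hi = np.quantile(y, [q, 1 - q])
    return np.flatnonzero((y <= lo) | (y >= hi))

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)                 # simulated continuous trait
idx = extreme_phenotype_sample(y, q=0.1)
print(len(idx))                               # roughly 200 of 1000 samples retained
```

Downstream, only the genotypes of the retained samples would enter the (decorrelated-score-tested) sparse regression.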

  12. Similar Anti-Inflammatory Acute Responses from Moderate-Intensity Continuous and High-Intensity Intermittent Exercise

    Directory of Open Access Journals (Sweden)

    Carolina Cabral-Santos, José Gerosa-Neto, Daniela Sayuri Inoue, Valéria Leme Gonçalves Panissa, Luís Alberto Gobbo, Alessandro Moura Zagatto, Eduardo Zapaterra Campos, Fábio Santos Lira

    2015-12-01

    Full Text Available The purpose of this study was to compare the effect of high-intensity intermittent exercise (HIIE) versus volume-matched steady-state exercise (SSE) on inflammatory and metabolic responses. Eight physically active male subjects completed two experimental sessions, a 5-km run on a treadmill either continuously (70% vVO2max) or intermittently (1:1 min at vVO2max). Blood samples were collected at rest and immediately, 30, and 60 minutes after the exercise session. Blood was analyzed for glucose, non-esterified fatty acid (NEFA), uric acid, lactate, cortisol, and cytokine (IL-6, IL-10 and TNF-α) levels. The lactate levels exhibited higher values immediately post-exercise than at rest (HIIE 1.34 ± 0.24 to 7.11 ± 2.85, and SSE 1.35 ± 0.14 to 4.06 ± 1.60 mmol·L-1, p < 0.05). Cortisol, IL-6, IL-10 and TNF-α levels showed time-dependent changes under the different conditions (p < 0.05); however, the area under the curve of TNF-α in the SSE condition was higher than in HIIE (p < 0.05), and the area under the curve of IL-6 in the HIIE condition was higher than in SSE (p < 0.05). In addition, both exercise conditions promoted increased IL-10 levels and IL-10/TNF-α ratio (p < 0.05). In conclusion, our results demonstrated that both exercise protocols, when volume is matched, promote similar inflammatory responses, leading to an anti-inflammatory status; however, the metabolic responses are different.
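
The between-condition comparisons above use the area under the concentration-time curve, which from discretely sampled time points is routinely computed with the trapezoidal rule. A generic sketch with made-up IL-6 values (not the study's data):

```python
import numpy as np

# hypothetical IL-6 concentrations (pg/mL) at rest and 0, 30, 60 min post-exercise,
# mapped onto a single time axis in minutes
t = np.array([0.0, 30.0, 60.0, 90.0])
il6 = np.array([1.2, 6.5, 4.1, 2.0])

# trapezoidal area under the concentration-time curve
auc = float(np.sum((il6[1:] + il6[:-1]) / 2.0 * np.diff(t)))
print(auc)
```

Comparing such per-subject AUC values between conditions is what underlies statements like "the area under the curve of IL-6 in HIIE was higher than in SSE".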

  13. Generalized reduced rank latent factor regression for high dimensional tensor fields, and neuroimaging-genetic applications.

    Science.gov (United States)

    Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng

    2017-01-01

    We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high dimensional covariates. The model is motivated by the need from imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high dimensional tensor fields. GRRLF identifies from the structure in the data the effective dimensionality of the data, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and the flexibility of GRRLF also allow various statistical models to be handled in a unified framework and solutions can be efficiently computed. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.

  14. Dissecting high-dimensional phenotypes with bayesian sparse factor analysis of genetic covariance matrices.

    Science.gov (United States)

    Runcie, Daniel E; Mukherjee, Sayan

    2013-07-01

    Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need only consider G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse, affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
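
The structural assumption described here, that a few sparse latent factors plus trait-specific noise generate the G-matrix, can be written as G = ΛΛᵀ + Ψ. A toy numpy construction (hypothetical dimensions and loadings, not the authors' Bayesian sampler) makes the induced structure concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 100, 3                     # 100 observed traits, 3 latent factors

# sparse loading matrix: each factor affects only a small block of traits
Lam = np.zeros((p, k))
Lam[0:10, 0] = rng.normal(1.0, 0.2, 10)
Lam[10:25, 1] = rng.normal(-0.8, 0.2, 15)
Lam[25:30, 2] = rng.normal(0.5, 0.1, 5)

Psi = np.diag(rng.uniform(0.1, 0.3, p))   # trait-specific residual variances
G = Lam @ Lam.T + Psi                     # structured genetic covariance matrix

print(G.shape, np.linalg.matrix_rank(Lam @ Lam.T))
```

The low-rank part of G is rank 3 by construction, which is exactly the "limited number of intermediate traits" restriction the abstract argues keeps estimation tractable for thousands of traits.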

  15. Three-dimensional true FISP for high-resolution imaging of the whole brain

    International Nuclear Information System (INIS)

    Schmitz, B.; Hagen, T.; Reith, W.

    2003-01-01

    While high-resolution T1-weighted sequences, such as three-dimensional magnetization-prepared rapid gradient-echo imaging, are widely available, there is a lack of an equivalent fast high-resolution sequence providing T2 contrast. Using fast high-performance gradient systems we show the feasibility of three-dimensional true fast imaging with steady-state precession (FISP) to fill this gap. We applied a three-dimensional true-FISP protocol with voxel sizes down to 0.5 x 0.5 x 0.5 mm and acquisition times of approximately 8 min on a 1.5-T Sonata (Siemens, Erlangen, Germany) magnetic resonance scanner. The sequence was included into routine brain imaging protocols for patients with cerebrospinal-fluid-related intracranial pathology. Images from 20 patients and 20 healthy volunteers were evaluated by two neuroradiologists with respect to diagnostic image quality and artifacts. All true-FISP scans showed excellent imaging quality free of artifacts in patients and volunteers. They were valuable for the assessment of anatomical and pathologic aspects of the included patients. High-resolution true-FISP imaging is a valuable adjunct for the exploration and neuronavigation of intracranial pathologies especially if cerebrospinal fluid is involved. (orig.)

  16. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
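
The report's strategy of composing one-dimensional solvers into a four-dimensional integrator can be sketched as a tensor-product rule. The following uses a plain 1D trapezoidal rule (not the GNU Scientific Library solvers mentioned above) on a separable integrand with a known answer:

```python
import numpy as np

def integrate_4d(f, n=30):
    """Tensor-product trapezoidal rule over [0,1]^4 built from a 1D rule."""
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))              # 1D trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5
    X1, X2, X3, X4 = np.meshgrid(x, x, x, x, indexing="ij")
    W = (w[:, None, None, None] * w[None, :, None, None]
         * w[None, None, :, None] * w[None, None, None, :])
    return float(np.sum(W * f(X1, X2, X3, X4)))

# separable test integrand with the known answer (1 - e^{-1})^4
val = integrate_4d(lambda a, b, c, d: np.exp(-(a + b + c + d)))
print(val, (1 - np.exp(-1)) ** 4)
```

The cost grows as n^4, which is exactly why the report looks to quasi-Monte Carlo and parallelization for anything beyond a few dimensions.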

  17. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    Directory of Open Access Journals (Sweden)

    András Király

    2014-01-01

    Full Text Available During the last decade, various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
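
The matrix-multiplication idea at the heart of such methods is easy to sketch: for a binary data table B (rows = transactions or samples, columns = items), the product BᵀB counts in one operation how many rows contain each pair of items. A toy example (illustrative data, not the authors' MATLAB implementation):

```python
import numpy as np

# toy binary transaction table: rows = samples, columns = items
B = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]], dtype=np.int64)

support = B.sum(axis=0)     # single-item supports
co = B.T @ B                # co[i, j] = number of rows containing both items i and j

min_support = 3
pairs = [(i, j) for i in range(B.shape[1]) for j in range(i + 1, B.shape[1])
         if co[i, j] >= min_support]
print(support, pairs)       # pairs with support >= 3: (0, 1) and (1, 3)
```

Extending the same bit-table products to larger itemsets, and checking closure, is the direction the paper's algorithm takes; the point here is only that one dense product replaces many per-row scans.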

  18. Single cell proteomics in biomedicine: High-dimensional data acquisition, visualization, and analysis.

    Science.gov (United States)

    Su, Yapeng; Shi, Qihui; Wei, Wei

    2017-02-01

    New insights into cellular heterogeneity over the last decade have spurred the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. One- and two-dimensional sublattices as preconditions for high-Tc superconductivity

    International Nuclear Information System (INIS)

    Krueger, E.

    1989-01-01

    In an earlier paper it was proposed to describe superconductivity in the framework of a nonadiabatic Heisenberg model in order to interpret the outstanding symmetry properties of the (spin-dependent) Wannier functions in the conduction bands of superconductors. This new group-theoretical model suggests that Cooper pair formation can only be mediated by boson excitations carrying crystal-spin angular momentum. While in the three-dimensionally isotropic lattices of the standard superconductors phonons are able to transport crystal-spin angular momentum, this is not true for phonons propagating through the one- or two-dimensional Cu-O sublattices of the high-Tc compounds. Therefore, if such an anisotropic material is superconducting, it is necessarily higher-energetic excitations (of well-defined symmetry) which mediate pair formation. This fact is proposed to be responsible for the high transition temperatures of these compounds. (author)

  20. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
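
In the missing-completely-at-random setting described here, a natural entrywise estimator (a generic sketch in the spirit of the paper, not its exact procedure) normalizes each covariance entry by the number of rows in which both coordinates are observed:

```python
import numpy as np

def cov_incomplete(X):
    """Entrywise covariance from data with NaNs: each (j, k) entry is computed
    from the rows where both coordinates are observed (MCAR assumption).
    Means are taken per coordinate over its observed entries."""
    M = ~np.isnan(X)                                   # observation mask
    counts = M.T.astype(float) @ M.astype(float)       # jointly observed row counts
    means = np.where(M, X, 0.0).sum(axis=0) / M.sum(axis=0)
    Xc = np.where(M, X - means, 0.0)                   # centered, zeros at NaNs
    return (Xc.T @ Xc) / np.maximum(counts - 1.0, 1.0)

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 4)) @ np.diag([1.0, 2.0, 0.5, 1.0])
X[rng.random(X.shape) < 0.2] = np.nan      # 20% missing completely at random
S = cov_incomplete(X)
print(np.round(np.diag(S), 2))             # close to the true variances [1, 4, 0.25, 1]
```

Because different entries are averaged over different numbers of rows, such an estimator need not be positive semidefinite, which is part of why the paper's bandable/sparse structural assumptions and rate analysis are needed.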

  2. Dairy Attenuates Weight Gain to a Similar Extent as Exercise in Rats Fed a High-Fat, High-Sugar Diet.

    Science.gov (United States)

    Trottier, Sarah K; MacPherson, Rebecca E K; Knuth, Carly M; Townsend, Logan K; Peppler, Willem T; Mikhaeil, John S; Leveille, Cam F; LeBlanc, Paul J; Shearer, Jane; Reimer, Raylene A; Wright, David C

    2017-10-01

    To compare the individual and combined effects of dairy and endurance exercise training in reducing weight gain and adiposity in a rodent model of diet-induced obesity. An 8-week feeding intervention of a high-fat, high-sugar diet was used to induce obesity in male Sprague-Dawley rats. Rats were then assigned to one of four groups for 6 weeks: (1) casein sedentary (casein-S), (2) casein exercise (casein-E), (3) dairy sedentary (dairy-S), and (4) dairy exercise (dairy-E). Rats were exercise trained by treadmill running 5 d/wk. Dairy-E prevented weight gain to a greater extent than either dairy or exercise alone. Adipose tissue and liver mass were reduced to a similar extent in dairy-S, casein-E, and dairy-E groups. Differences in weight gain were not explained by food intake or total energy expenditure. The total amount of lipid excreted was greater in the dairy-S compared to casein-S and dairy-E groups. This study provides evidence that dairy limits weight gain to a similar extent as exercise training and the combined effects are greater than either intervention alone. While exercise training reduces weight gain through increases in energy expenditure, dairy appears to increase lipid excretion in the feces. © 2017 The Obesity Society.

  3. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas

    2011-11-09

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction, resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  4. High-Efficiency Dye-Sensitized Solar Cell with Three-Dimensional Photoanode

    KAUST Repository

    Tétreault, Nicolas; Arsenault, Éric; Heiniger, Leo-Philipp; Soheilnia, Navid; Brillet, Jérémie; Moehl, Thomas; Zakeeruddin, Shaik; Ozin, Geoffrey A.; Grätzel, Michael

    2011-01-01

    Herein, we present a straightforward bottom-up synthesis of a high electron mobility and highly light scattering macroporous photoanode for dye-sensitized solar cells. The dense three-dimensional Al/ZnO, SnO2, or TiO2 host integrates a conformal passivation thin film to reduce recombination and a large surface-area mesoporous anatase guest for high dye loading. This novel photoanode is designed to improve the charge extraction, resulting in higher fill factor and photovoltage for DSCs. An increase in photovoltage of up to 110 mV over state-of-the-art DSC is demonstrated. © 2011 American Chemical Society.

  5. High-performance permanent magnet brushless motors with balanced concentrated windings and similar slot and pole numbers

    International Nuclear Information System (INIS)

    Stumberger, Bojan; Stumberger, Gorazd; Hadziselimovic, Miralem; Hamler, Anton; Trlep, Mladen; Gorican, Viktor; Jesenik, Marko

    2006-01-01

    The paper compares the performance of exterior-rotor permanent magnet brushless motors with distributed windings to that of exterior-rotor permanent magnet brushless motors with concentrated windings. Finite element method analysis is employed to determine the performance of each motor. It is shown that motors with concentrated windings and similar slot and pole numbers exhibit similar or better performance than motors with distributed windings in both the brushless AC (BLAC) and brushless DC (BLDC) operation modes.

  6. Cooperative simulation of lithography and topography for three-dimensional high-aspect-ratio etching

    Science.gov (United States)

    Ichikawa, Takashi; Yagisawa, Takashi; Furukawa, Shinichi; Taguchi, Takafumi; Nojima, Shigeki; Murakami, Sadatoshi; Tamaoki, Naoki

    2018-06-01

    A topography simulation of high-aspect-ratio etching considering transports of ions and neutrals is performed, and the mechanism of reactive ion etching (RIE) residues in three-dimensional corner patterns is revealed. Limited ion flux and CF2 diffusion from the wide space of the corner are found to have an effect on the RIE residues. Cooperative simulation of lithography and topography is used to solve the RIE residue problem.

  7. Reduced, three-dimensional, nonlinear equations for high-β plasmas including toroidal effects

    International Nuclear Information System (INIS)

    Schmalz, R.

    1980-11-01

    The resistive MHD equations for toroidal plasma configurations are reduced by expanding to second order in ε, the inverse aspect ratio, allowing for high β = μ₀p/B² of order ε. The result is a closed system of nonlinear, three-dimensional equations where the fast magnetohydrodynamic time scale is eliminated. In particular, the equation for the toroidal velocity remains decoupled. (orig.)

  8. Two- and three-dimensional heat analysis inside a high pressure electrical discharge tube

    International Nuclear Information System (INIS)

    Aghanajafi, C.; Dehghani, A. R.; Fallah Abbasi, M.

    2005-01-01

    This article presents the heat transfer analysis for a horizontal high-pressure mercury vapor tube. To obtain a more realistic numerical simulation, radiative heat transfer in different wavelength bands has been used besides convection and conduction heat transfer. The analysis for different gases at different pressures in two- and three-dimensional cases has been investigated and the results compared with empirical and semi-empirical values. The effect of the environmental temperature on the arc tube temperature is also studied

  9. Controlling chaos in low and high dimensional systems with periodic parametric perturbations

    International Nuclear Information System (INIS)

    Mirus, K.A.; Sprott, J.C.

    1998-06-01

    The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
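
    The effect described here can be reproduced in miniature with a one-dimensional chaotic map whose accessible parameter is modulated periodically. The sketch below estimates the largest Lyapunov exponent of the logistic map under a cyclic parameter sequence; it is a toy illustration of periodic parametric perturbation, not the plasma-device application, and the parameter values are arbitrary.

```python
import numpy as np

def lyapunov(r_seq, x0=0.3, n=20000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    with the parameter r cycling through r_seq (a periodic parametric
    perturbation when len(r_seq) > 1)."""
    x, acc, cnt = x0, 0.0, 0
    for k in range(n):
        r = r_seq[k % len(r_seq)]
        if k >= burn:
            # log of the map's derivative at the current iterate
            acc += np.log(abs(r * (1.0 - 2.0 * x)))
            cnt += 1
        x = r * x * (1.0 - x)
    return acc / cnt
```

    For a constant parameter r = 4 the exponent is ln 2 ≈ 0.69 (fully chaotic); whether a given perturbation frequency and amplitude drives the exponent negative (a limit cycle) must be checked case by case, as in the study above.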

  10. GAMLSS for high-dimensional data – a flexible approach based on boosting

    OpenAIRE

    Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias

    2010-01-01

    Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) on a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...

  11. Preface [HD3-2015: International meeting on high-dimensional data-driven science]

    International Nuclear Information System (INIS)

    2016-01-01

    A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)

  12. High-definition resolution three-dimensional imaging systems in laparoscopic radical prostatectomy: randomized comparative study with high-definition resolution two-dimensional systems.

    Science.gov (United States)

    Kinoshita, Hidefumi; Nakagawa, Ken; Usui, Yukio; Iwamura, Masatsugu; Ito, Akihiro; Miyajima, Akira; Hoshi, Akio; Arai, Yoichi; Baba, Shiro; Matsuda, Tadashi

    2015-08-01

    Three-dimensional (3D) imaging systems have been introduced worldwide for surgical instrumentation. A difficulty of laparoscopic surgery involves converting two-dimensional (2D) images into 3D images and rearranging depth perception. 3D imaging may remove the need for this rearrangement and therefore have clinical benefits. We conducted a multicenter, open-label, randomized trial to compare the surgical outcomes of 3D high-definition (HD) and 2D-HD imaging in laparoscopic radical prostatectomy (LRP), in order to determine whether LRP under 3D-HD imaging is superior to that under 2D-HD imaging in perioperative outcome, feasibility, and fatigue. One hundred twenty-two patients were randomly assigned to a 2D or 3D group. The primary outcome was time to perform vesicourethral anastomosis (VUA), a technically demanding step that includes many of the difficulties encountered in laparoscopic surgery. VUA time was not significantly shorter in the 3D group (mean 26.7 min) than in the 2D group (mean 30.1 min) (p = 0.11, Student's t test). However, experienced surgeons and 3D-HD imaging were independent predictors of shorter VUA times (p < 0.001, p = 0.014, multivariate logistic regression analysis). Total pneumoperitoneum time was not different. No conversion from 3D to 2D or from LRP to open RP was observed. Fatigue was evaluated by a simulation sickness questionnaire and critical flicker frequency; results did not differ between the two groups. Subjective feasibility and satisfaction scores were significantly higher in the 3D group. Using a 3D imaging system in LRP may have only limited advantages in decreasing operation times over 2D imaging systems. However, the 3D system increased surgical feasibility and decreased surgeons' effort levels without inducing significant fatigue.

  13. Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2009-01-01

    Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that increasing the number of hypercycle units lengthens the resilience time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is studied numerically, focusing on the extinction dynamics associated with the ghosts. Such networks allow exploration of the properties of ghosts living in high dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold
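
    The ghost mechanism itself can be demonstrated with the canonical one-dimensional normal form dx/dt = μ + x² just past a saddle-node bifurcation (μ > 0 small): trajectories stall in the bottleneck near x = 0 for a time scaling as μ^(-1/2). This is a generic sketch of the delayed-transition scaling, not the hypercycle equations themselves; the integration range and step are arbitrary.

```python
import numpy as np

def passage_time(mu, x0=-1.0, x1=1.0, dt=1e-3):
    """Euler integration of dx/dt = mu + x**2: time to traverse the
    ghost bottleneck near x = 0 after the saddle-node has vanished."""
    x, t = x0, 0.0
    while x < x1:
        x += (mu + x * x) * dt
        t += dt
    return t
```

    Analytically the passage time approaches π/√μ for small μ, so shrinking μ by a factor of 100 lengthens the delay about tenfold.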

  14. Dimensional measurement of micro parts with high aspect ratio in HIT-UOI

    Science.gov (United States)

    Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin

    2016-11-01

    Micro parts with high aspect ratios have been widely used in different fields, including the aerospace and defense industries, while the dimensional measurement of these micro parts remains a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for precision measurement of micro parts have been proposed by researchers in the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures based on spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, both advantages and disadvantages of these probes are analyzed. To improve the performance of these probes, several approaches are proposed: a two-dimensional orthogonal path arrangement to enhance the dimensional measurement ability of MFL-collimation probes, and a high-resolution, fast-response interrogation method based on a differential scheme to improve the accuracy and dynamic characteristics of the FBG probes. Experiments for these special structural fiber probes are presented with a focus on their characteristics, and engineering applications are also presented to prove their availability. To improve the accuracy and the real-time performance of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was therefore verified through both analysis and experiments.

  15. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
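
    The selection-then-quantization idea can be sketched as follows: score each dimension for class separability, keep the top-scoring dimensions, and binarize them to one bit each. The scoring rule below is a simple Fisher-style ratio chosen for illustration; the paper's importance sorting algorithm and its unsupervised variant are not reproduced here.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-dimension class-separability score for two classes (supervised case)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    s0, s1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
    return np.abs(m0 - m1) / (s0 + s1 + 1e-12)

def select_and_binarize(X, scores, keep):
    """Keep the `keep` top-scoring dimensions, then 1-bit quantize each
    kept dimension by thresholding at its mean."""
    idx = np.argsort(scores)[::-1][:keep]
    B = (X[:, idx] > X[:, idx].mean(axis=0)).astype(np.uint8)
    return idx, B
```

    On synthetic data where only a few dimensions carry class information, the score ranks those dimensions first and the retained features compress to one bit each.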

  16. High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.

    Science.gov (United States)

    Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton

    2017-11-03

    Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
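
    In matrix form, the d-dimensional generalization of the Pauli X gate is the cyclic shift X|j⟩ = |(j+1) mod d⟩, and the Z gate is the diagonal phase gate Z|j⟩ = ω^j|j⟩ with ω = e^(2πi/d). The sketch below builds both for d = 4 and checks the defining relations; the physical implementation via parity-dependent mode sorting is, of course, not modeled.

```python
import numpy as np

d = 4
omega = np.exp(2j * np.pi / d)
# X|j> = |(j+1) mod d>: a cyclic permutation of the basis states
X = np.roll(np.eye(d), 1, axis=0)
# Z|j> = omega**j |j>: diagonal phases given by the d-th roots of unity
Z = np.diag(omega ** np.arange(d))
```

    All integer powers X, X², X³ are obtained the same way, and ZX = ωXZ is the d-level analogue of the familiar qubit commutation relation.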

  17. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality increases rapidly day by day, and this trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
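
    The overall scheme — partition the feature space, train one base classifier per subset, combine by majority voting — can be sketched as below. For a self-contained example the partition is random and the base learner is a nearest-centroid classifier, rather than the redundancy-based partition and SVMs used in the paper; labels are assumed to be small non-negative integers.

```python
import numpy as np

def train_centroids(X, y):
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(classes, cents, X):
    dist = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return classes[dist.argmin(axis=1)]

def ensemble_predict(X_train, y_train, X_test, n_parts=3, seed=0):
    """Partition features into disjoint subsets, train one base classifier
    per subset, and combine test predictions by majority voting."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(X_train.shape[1])
    votes = []
    for part in np.array_split(idx, n_parts):
        classes, cents = train_centroids(X_train[:, part], y_train)
        votes.append(predict_centroids(classes, cents, X_test[:, part]))
    votes = np.stack(votes)  # shape (n_parts, n_test)
    # majority vote per test sample across the base classifiers
    return np.array([np.bincount(col).argmax() for col in votes.T])
```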

  18. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Malgorzata Nowicka

    2017-05-01

    Full Text Available High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell counts or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals).

  19. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality images at high speed. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the direct filtered back projection (DFBP) method has been employed as a reconstruction method to process fan-beam projection data directly. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the FBP method, was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded due to the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, the employment of spline interpolation, which allows the acquisition of high quality images with fewer errors, has been shown by numerical and visual evaluation based on simulation and actual data. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)

  20. Preparation of three-dimensional graphene foam for high performance supercapacitors

    Directory of Open Access Journals (Sweden)

    Yunjie Ping

    2017-04-01

    Full Text Available Supercapacitors are a new type of energy-storage device and have attracted wide attention. As a two-dimensional (2D) nanomaterial, graphene is considered a promising material for supercapacitors because of its excellent properties, including high electrical conductivity and large surface area. In this paper, large-scale graphene is successfully fabricated via environmentally friendly electrochemical exfoliation of graphite, and then three-dimensional (3D) graphene foam is prepared using nickel foam as a template and FeCl3/HCl solution as an etchant. Compared with regular 2D graphene paper, the 3D graphene foam electrode shows better electrochemical performance and exhibits a maximum specific capacitance of approximately 128 F/g at a current density of 1 A/g in 6 M KOH electrolyte. It is expected that 3D graphene foam will have potential applications in supercapacitors.

  1. Four-dimensional (4D) tracking of high-temperature microparticles

    International Nuclear Information System (INIS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-01-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
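
    The reconstruction step rests on the standard pinhole camera model: each calibrated view contributes a 3×4 projection matrix, and a 3D point is recovered from its 2D images by linear (DLT) triangulation. The sketch below shows the two-view case with invented camera parameters; the actual system calibrates several high-speed views and repeats this per frame to obtain time-resolved 3D tracks.

```python
import numpy as np

def project(P, Xw):
    """Pinhole projection of a 3D point Xw through a 3x4 camera matrix P."""
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of a 3D point from two calibrated views."""
    A = np.stack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # the homogeneous solution is the right singular vector of the
    # smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```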

  2. Hierarchical one-dimensional ammonium nickel phosphate microrods for high-performance pseudocapacitors

    CSIR Research Space (South Africa)

    Raju, K

    2015-12-01

    Full Text Available Kumar Raju & Kenneth I. Ozoemena, Scientific Reports 5:17629, DOI: 10.1038/srep17629. High-performance electrochemical capacitors... Hierarchical 1-D and 2-D materials maximize the supercapacitive properties due to their unique ability to permit ion...

  3. On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy

    Science.gov (United States)

    Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.

    2017-10-01

    High dose rate brachytherapy calls for frequent reassurance of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions. The latter are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
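
    Classical multi-dimensional scaling recovers a point configuration from a distance matrix by double-centering and eigendecomposition, which is the core of estimating positions without an external reference. The sketch below is the textbook Torgerson procedure, not the paper's full pipeline.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed a Euclidean distance matrix D in k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]              # take the k largest
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

    For an exactly Euclidean distance matrix the embedding reproduces all pairwise distances (up to rotation and reflection).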

  4. High-dimensional data: p >> n in mathematical statistics and bio-medical applications

    OpenAIRE

    Van De Geer, Sara A.; Van Houwelingen, Hans C.

    2004-01-01

    The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) have brought to life a whole new branch of data analysis under the name of...

  5. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to achieve. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
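
    The LSH methodology that the LSB-tree builds on can be sketched with the p-stable hashing scheme for Euclidean distance: each table hashes a point through several random projections, and a query inspects only the buckets it collides with. This is plain bucketed LSH, not the B-tree organization of the paper; all parameter values are illustrative.

```python
import numpy as np
from collections import defaultdict

class E2LSH:
    """Minimal locality-sensitive hashing index for Euclidean NN search."""

    def __init__(self, dim, n_tables=4, n_hashes=6, w=2.0, seed=0):
        rng = np.random.default_rng(seed)
        # one bundle of random projections per table
        self.a = rng.normal(size=(n_tables, n_hashes, dim))
        self.b = rng.uniform(0, w, size=(n_tables, n_hashes))
        self.w = w
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        # quantized projections; each table's key is a tuple of hash values
        h = np.floor((self.a @ x + self.b) / self.w).astype(int)
        return [tuple(row) for row in h]

    def insert(self, i, x):
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(i)

    def query(self, x):
        # candidate set = union of the colliding buckets across tables
        cand = set()
        for table, key in zip(self.tables, self._keys(x)):
            cand.update(table.get(key, []))
        return cand
```

    Nearby points collide with high probability, so the candidate set is refined with exact distances; distant points rarely share a bucket, which is what keeps query cost sublinear.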

  6. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
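
    The ANOVA idea underlying the adaptive scheme — approximating a high-dimensional function by a constant plus a sum of univariate component functions — can be sketched with a Monte Carlo first-order surrogate. For additive test functions the first-order decomposition is exact up to sampling error; the test function and sample size below are invented for the example.

```python
import numpy as np

def hdmr_first_order(f, x, d, n=20000, seed=0):
    """Evaluate the first-order ANOVA surrogate f0 + sum_i (E[f | x_i] - f0)
    at the point x, with expectations taken by Monte Carlo over uniform
    [0, 1]^d inputs. f must accept an (n, d) array and return (n,) values."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=(n, d))
    f0 = f(U).mean()                 # constant (zeroth-order) term
    out = f0
    for i in range(d):
        Ui = U.copy()
        Ui[:, i] = x[i]              # condition on the i-th coordinate
        out += f(Ui).mean() - f0     # univariate component at x[i]
    return out
```

    For a purely additive function such as f(x) = Σ_i sin(2πx_i), the first-order surrogate reproduces f itself up to Monte Carlo error.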

  7. Five-dimensional visualization of phase transition in BiNiO3 under high pressure

    International Nuclear Information System (INIS)

    Liu, Yijin; Wang, Junyue; Yang, Wenge; Azuma, Masaki; Mao, Wendy L.

    2014-01-01

    Colossal negative thermal expansion was recently discovered in BiNiO3, associated with a low-density to high-density phase transition under high pressure. The varying proportion of co-existing phases plays a key role in the macroscopic behavior of this material. Here, we utilize a recently developed X-ray Absorption Near Edge Spectroscopy Tomography method and resolve the mixture of high/low pressure phases as a function of pressure at tens-of-nanometer resolution, taking advantage of the charge transfer during the transition. This five-dimensional (X, Y, Z, energy, and pressure) visualization of the phase boundary provides a high resolution method to study the interface dynamics of high/low pressure phase

  8. Characterization of differentially expressed genes using high-dimensional co-expression networks

    DEFF Research Database (Denmark)

    Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.

    2010-01-01

    We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...
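    A minimal numpy sketch of the Gaussian graphical model idea underlying such co-expression networks (a hypothetical 4-node chain example, not the authors' decomposable-model machinery): network edges correspond to non-zero partial correlations, which can be read off the inverse covariance (precision) matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-"gene" chain network 0-1-2-3, encoded as a tridiagonal
# precision matrix (zeros = conditional independence = no edge).
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(K), size=5000)

# Estimate the precision matrix and partial correlations from data.
K_hat = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(K_hat))
pcorr = -K_hat / np.outer(d, d)      # off-diagonal partial correlations
np.fill_diagonal(pcorr, 1.0)

# Edges of the estimated network: |partial correlation| above a cutoff.
edges = (np.abs(pcorr) > 0.2) & ~np.eye(4, dtype=bool)
print(edges.astype(int))
```

With enough samples the chain 0-1-2-3 is recovered and no spurious edges appear; real gene-expression settings (thousands of genes, tens of samples) are exactly where this naive inversion fails and the decomposable-model machinery of the paper is needed.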

  9. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern-recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  10. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► The 3D graphene/polyaniline has never been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized by in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant-current charge–discharge tests. A high gravimetric capacitance of 463 F g⁻¹ at a scan rate of 1 mV s⁻¹ is obtained from CVs with 3 mol L⁻¹ KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of the 3D graphene contribute to the high specific capacitance and good cyclic life

  11. Three-Dimensional Numerical Analysis of an Operating Helical Rotor Pump at High Speeds and High Pressures including Cavitation

    Directory of Open Access Journals (Sweden)

    Zhou Yang

    2017-01-01

    Full Text Available High pressures, high speeds, low noise and miniaturization are the directions of development in hydraulic pumps. Following this trend, an operating helical rotor pump (HRP) for high speeds and high pressures has been designed and produced, whose rotational speed can reach 12000 r/min and whose outlet pressure is as high as 25 MPa. Three-dimensional simulation of the flow inside the HRP, with and without cavitation, is carried out by means of computational fluid dynamics (CFD) in this paper, which helps in understanding the complex fluid flow inside it. Moreover, the influence of the rotational speed of the HRP, with and without cavitation, has been simulated at 25 MPa.

  12. TSAR: a program for automatic resonance assignment using 2D cross-sections of high dimensionality, high-resolution spectra

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)

    2012-09-15

    While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignment represents in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.

  13. New self-similar radiation-hydrodynamics solutions in the high-energy density, equilibrium diffusion limit

    International Nuclear Information System (INIS)

    Lane, Taylor K; McClarren, Ryan G

    2013-01-01

    This work presents semi-analytic solutions to a radiation-hydrodynamics problem of a radiation source driving an initially cold medium. Our solutions are in the equilibrium diffusion limit, include material motion and allow for radiation-dominated situations where the radiation energy is comparable to (or greater than) the material internal energy density. As such, this work is a generalization of the classical Marshak wave problem that assumes no material motion and that the radiation energy is negligible. Including radiation energy density in the model serves to slow down the wave propagation. The solutions provide insight into the impact of radiation energy and material motion, as well as present a novel verification test for radiation transport packages. As a verification test, the solution exercises the radiation–matter coupling terms and their v/c treatment without needing a hydrodynamics solve. An example comparison between the self-similar solution and a numerical code is given. Tables of the self-similar solutions are also provided. (paper)

  14. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people actually do is select a few thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel-based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data; the proposed method performs superbly in the limited computational studies conducted.
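    The computational point the abstract makes, that the dual kernel formulation works with an n × n system regardless of the ambient gene dimension m, can be sketched as follows. This is a hypothetical uncensored toy example; the paper's adaptive kernel, censoring handling and variable selection are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: n samples, m >> n "genes"; log survival time depends on 2 genes.
n, m = 60, 2000
X = rng.standard_normal((n, m))
log_t = 1.5 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(n)  # AFT: log T = f(X) + eps

def rbf_kernel(A, B, gamma=1.0 / 2000):   # gamma ~ 1/m, a common heuristic
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Dual kernel ridge regression: solve (K + lam*I) alpha = y, an n x n
# linear system whose size is independent of the gene dimension m.
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(n), log_t)

# Predictions for new samples only need kernel evaluations against X.
X_new = rng.standard_normal((5, m))
pred = rbf_kernel(X_new, X) @ alpha
print(pred.shape)
```

The fitted values are K @ alpha; all heavy linear algebra is on 60 × 60 matrices even though each sample lives in a 2000-dimensional space.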

  15. On-chip generation of high-dimensional entangled quantum states and their coherent control.

    Science.gov (United States)

    Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto

    2017-06-28

    Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.

  16. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  17. Pure Cs4PbBr6: Highly Luminescent Zero-Dimensional Perovskite Solids

    KAUST Repository

    Saidaminov, Makhsud I.

    2016-09-26

    So-called zero-dimensional perovskites, such as Cs4PbBr6, promise outstanding emissive properties. However, Cs4PbBr6 is mostly prepared by melting of precursors that usually leads to a coformation of undesired phases. Here, we report a simple low-temperature solution-processed synthesis of pure Cs4PbBr6 with remarkable emission properties. We found that pure Cs4PbBr6 in solid form exhibits a 45% photoluminescence quantum yield (PLQY), in contrast to its three-dimensional counterpart, CsPbBr3, which exhibits more than 2 orders of magnitude lower PLQY. Such a PLQY of Cs4PbBr6 is significantly higher than that of other solid forms of lower-dimensional metal halide perovskite derivatives and perovskite nanocrystals. We attribute this dramatic increase in PL to the high exciton binding energy, which we estimate to be ∼353 meV, likely induced by the unique Bergerhoff–Schmitz–Dumont-type crystal structure of Cs4PbBr6, in which metal-halide-comprised octahedra are spatially confined. Our findings bring this class of perovskite derivatives to the forefront of color-converting and light-emitting applications.

  18. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th-order polynomial nodal expansion method (NEM). As the 4th-order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate for vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speedup factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information on usage of this code, including input data instructions and sample input data. (author)

  19. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th-order polynomial nodal expansion method (NEM). As the 4th-order NEM is not sensitive to mesh size, accurate calculation is possible with coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate for vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speedup factor compared to scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and that of the coarse-mesh method, the total speedup factor is more than 1000 compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information on usage of this code, including input data instructions and sample input data. (author)
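    The "boundary separated checkerboard sweep method" is specific to MOSRA-Light, but the red-black (checkerboard) ordering it builds on is easy to illustrate: cells of one colour have neighbours only of the other colour, so each half-sweep updates a whole colour class in one array-wide, vectorizable operation. A small sketch for a 2D Poisson model problem (an illustrative stand-in, not the nodal diffusion solver):

```python
import numpy as np

# Red-black ("checkerboard") Gauss-Seidel for -Laplace(u) = f on the unit
# square with u = 0 on the boundary.  Each half-sweep updates all cells of
# one colour simultaneously -- the property that makes the sweep vectorize.
N = 32
u = np.zeros((N, N))
f = np.ones((N, N))                 # uniform source
h2 = (1.0 / (N - 1)) ** 2

i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
red = ((i + j) % 2 == 0)

for sweep in range(500):
    for colour in (red, ~red):
        mask = colour.copy()
        mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
        nbr = np.zeros_like(u)      # sum of the four neighbours (current values)
        nbr[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                           u[1:-1, :-2] + u[1:-1, 2:])
        u[mask] = 0.25 * (nbr[mask] + h2 * f[mask])

print(u[N // 2, N // 2])
```

On a vector (or SIMD) machine the per-colour update is a single long vector operation, and the advantage grows with problem size, consistent with the abstract's observation that the vectorization speedup increases with the scale of the problem.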

  20. Research on Non-Similarity about Thermal Deformation Error of Mechanical Parts in High-accuracy Measurement

    International Nuclear Information System (INIS)

    Luo, Z; Fei, Y T

    2006-01-01

    Expansion with heat and contraction with cold are common physical phenomena in nature. The conventional theories and calculations of thermal deformation are approximate and linear, and can only be applied in normal- or low-precision fields. The thermal deformation error of mechanical parts does not follow the conventional linear formula; it depends on all physical dimensions of the part, and the deformation can be described by a nonlinear formula of those dimensions. A theory of non-similarity of the thermal deformation error of mechanical parts is presented. Studies of some common mechanical parts in precision technology have been carried out and mathematical models have been set up, including hollow pieces, gears and cubes. The experimental results also make it clear that these models are more reasonable than the traditional models

  1. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huttmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.; Bednarczyk, P.

    1992-01-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig

  2. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  3. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S; Huttmeier, U J; France, G de; Haas, B; Romain, P; Theisen, Ch; Vivien, J P; Zen, J [Centre National de la Recherche Scientifique (CNRS), 67 - Strasbourg (France); Bednarczyk, P [Institute of Nuclear Physics, Cracow (Poland)

    1992-08-01

    High resolution γ-ray multi-detectors capable of measuring high-fold coincidences with a large efficiency are presently under construction (EUROGAM, GASP, GAMMASPHERE). The future experimental progress in our understanding of nuclear structure at high spin critically depends on our ability to analyze the data in a multi-dimensional space and to resolve small photopeaks of interest from the generally large background. Development of programs to process such high-fold events is still in its infancy and only the 3-fold case has been treated so far. As a contribution to the software development associated with the EUROGAM spectrometer, we have written and tested the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases. The tests were performed on events generated with a Monte Carlo simulation and also on experimental data (triples) recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (author). 7 refs., 3 tabs., 1 fig.

  4. Highly Efficient Broadband Yellow Phosphor Based on Zero-Dimensional Tin Mixed-Halide Perovskite.

    Science.gov (United States)

    Zhou, Chenkun; Tian, Yu; Yuan, Zhao; Lin, Haoran; Chen, Banghao; Clark, Ronald; Dilbeck, Tristan; Zhou, Yan; Hurley, Joseph; Neu, Jennifer; Besara, Tiglet; Siegrist, Theo; Djurovich, Peter; Ma, Biwu

    2017-12-27

    Organic-inorganic hybrid metal halide perovskites have emerged as a highly promising class of light emitters, which can be used as phosphors for optically pumped white light-emitting diodes (WLEDs). By controlling the structural dimensionality, metal halide perovskites can exhibit tunable narrow and broadband emissions from the free-exciton and self-trapped excited states, respectively. Here, we report a highly efficient broadband yellow light emitter based on the zero-dimensional tin mixed-halide perovskite (C4N2H14Br)4SnBrxI6-x (x = 3). This rare-earth-free ionically bonded crystalline material possesses a perfect host-dopant structure, in which the light-emitting metal halide species (SnBrxI6-x4-, x = 3) are completely isolated from each other and embedded in the wide-band-gap organic matrix composed of C4N2H14Br-. The strongly Stokes-shifted broadband yellow emission peaked at 582 nm from this phosphor, a result of excited-state structural reorganization, has an extremely large full width at half-maximum of 126 nm and a high photoluminescence quantum efficiency of ∼85% at room temperature. UV-pumped WLEDs fabricated using this yellow emitter together with a commercial europium-doped barium magnesium aluminate blue phosphor (BaMgAl10O17:Eu2+) can exhibit high color rendering indexes of up to 85.

  5. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aimed at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires image reconstruction markedly faster than in present CTs. Although various reconstruction methods have been proposed so far, the only method practically employed at present is the filtered backprojection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was not good, despite being promising for high-speed reconstruction owing to its shorter computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in the two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high-quality images by examining the relationship between image quality and the interpolation method. In this algorithm, the number of radial data sampling points in Fourier space is increased by a factor of 2 to the β-th power, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method is about 1/10 that of the FBP method, and the required memory is also reduced by about 20%. (Wakatsuki, Y.)
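    The TFT method rests on the central (Fourier) slice theorem: the 1-D Fourier transform of a parallel projection equals one radial line of the object's 2-D Fourier transform. The snippet below (an illustrative sketch, not the authors' algorithm) verifies this for the 0° projection; interpolating such radial lines onto a Cartesian grid before the inverse 2-D FFT is exactly the accuracy-critical step the abstract discusses.

```python
import numpy as np

# Central-slice theorem demonstration: the 1-D FFT of the 0-degree
# projection (a sum along one image axis) equals the k_y = 0 row of
# the image's 2-D FFT.
rng = np.random.default_rng(2)
img = rng.random((64, 64))          # stand-in for a CT slice

proj0 = img.sum(axis=0)             # parallel projection along y
slice_from_proj = np.fft.fft(proj0)         # 1-D FFT of the projection
slice_from_2d = np.fft.fft2(img)[0, :]      # k_y = 0 row of the 2-D FFT

print(np.allclose(slice_from_proj, slice_from_2d))  # True
```

For other view angles the same identity holds along a rotated radial line, which is why the polar-to-Cartesian interpolation quality dominates the reconstructed image quality.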

  6. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. The headache, eye fatigue and nausea reported with first-generation systems occur no more often than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probes of some devices are negative aspects that need to be improved. The loss of depth perception in 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side-effects reported by surgeons with first-generation systems, 3D LVSs seem to be strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, the use of lighter and smaller cameras and of monitors without glasses lies in the near future.

  7. Collective excitations and superconductivity in reduced dimensional systems - Possible mechanism for high Tc

    International Nuclear Information System (INIS)

    Santoyo, B.M.

    1989-01-01

    The author studies in full detail a possible mechanism of superconductivity in slender electronic systems of finite cross section. This mechanism is based on the pairing interaction mediated by the multiple modes of acoustic plasmons in these structures. First, he shows that multiple non-Landau-damped acoustic plasmon modes exist for electrons in a quasi-one-dimensional wire at finite temperatures. These plasmons are of two basic types. The first is made up of the collective longitudinal oscillations of the electrons, essentially of a given transverse energy level, oscillating against the electrons in the neighboring transverse energy level. These modes are called Slender Acoustic Plasmons, or SAPs. The other mode is the quasi-one-dimensional acoustic plasmon mode, in which all the electrons oscillate together in phase among themselves but out of phase against the positive ion background. He shows numerically and argues physically that even for a temperature comparable to the mode separation Δω, the SAPs and the quasi-one-dimensional plasmon persist. Then, based on a clear physical picture, he develops in terms of the dielectric function a theory of superconductivity capable of treating the simultaneous participation of multiple bosonic modes that mediate the pairing interaction. The effect of mode damping is then incorporated in a simple manner that is free of the encumbrance of the strong-coupling Green's function formalism usually required for the retardation effect. Explicit formulae including such damping are derived for the critical temperature Tc and the energy gap Δ0. With those modes, and armed with such a formalism, he proceeds to investigate a possible superconducting mechanism for high Tc in quasi-one-dimensional single-wire and multi-wire systems

  8. Presence of Stenotrophomonas maltophilia exhibiting high genetic similarity to clinical isolates in final effluents of pig farm wastewater treatment plants.

    Science.gov (United States)

    Kim, Young-Ji; Park, Jin-Hyeong; Seo, Kun-Ho

    2018-03-01

    Although the prevalence of community-acquired Stenotrophomonas maltophilia infections is sharply increasing, the sources and likely transmission routes of this bacterium are poorly understood. We studied the significance of the presence of S. maltophilia in final effluents and receiving rivers of pig farm wastewater treatment plants (WWTPs). The loads and antibiotic resistance profiles of S. maltophilia in final effluents were assessed. Antibiotic resistance determinants and biofilm formation genes were detected by PCR, and genetic similarity to clinical isolates was investigated using multilocus sequence typing (MLST). S. maltophilia was recovered from final effluents at two of three farms and one corresponding receiving river. Tests of resistance to antibiotics recommended for S. maltophilia infection revealed that for each agent, at least one isolate was classified as resistant or intermediate, with the exception of minocycline. Furthermore, multidrug resistant S. maltophilia susceptible to antibiotics of only two categories was isolated and found to carry the sul2 gene, conferring trimethoprim/sulfamethoxazole resistance. All isolates carried spgM, encoding a major factor in biofilm formation. MLST revealed that isolates of the same sequence type (ST; ST189) were present in both effluent and receiving river samples, and phylogenetic analysis showed that all of the STs identified in this study clustered with clinical isolates. Moreover, one isolate (ST192) recovered in this investigation demonstrated 99.61% sequence identity with a clinical isolate (ST98) associated with a fatal infection in South Korea. Thus, the pathogenicity of the isolates reported here is likely similar to that of those from clinical environments, and WWTPs may play a role as a source of S. maltophilia from which this bacterium spreads to human communities. To the best of our knowledge, this represents the first report of S. maltophilia in pig farm WWTPs. Our results indicate that

  9. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has become a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural features is generally a time-consuming process, especially for large areas affected by the earthquake, due to the size of the VHR image. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural as well as spectral information was used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray-level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture the input
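    The Haralick texture features mentioned above are derived from a gray-level co-occurrence matrix (GLCM). A minimal NumPy sketch of the idea (the 4×4 toy image and the single horizontal offset are illustrative assumptions, not the study's data):

    ```python
    import numpy as np

    def glcm(image, dx, dy, levels):
        """Normalized gray-level co-occurrence matrix for offset (dx, dy)."""
        P = np.zeros((levels, levels), dtype=float)
        rows, cols = image.shape
        for i in range(rows):
            for j in range(cols):
                i2, j2 = i + dy, j + dx
                if 0 <= i2 < rows and 0 <= j2 < cols:
                    P[image[i, j], image[i2, j2]] += 1
        return P / P.sum()

    def haralick_contrast(P):
        idx = np.arange(P.shape[0])
        return float(((idx[:, None] - idx[None, :]) ** 2 * P).sum())

    def haralick_homogeneity(P):
        idx = np.arange(P.shape[0])
        return float((P / (1.0 + np.abs(idx[:, None] - idx[None, :]))).sum())

    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [0, 2, 2, 2],
                    [2, 2, 3, 3]])
    P = glcm(img, dx=1, dy=0, levels=4)
    print(round(haralick_contrast(P), 3), round(haralick_homogeneity(P), 3))  # 0.583 0.819
    ```

    In practice such statistics are computed per sliding window and per direction, which is exactly why the feature space grows quickly and feature selection (here, HDMR) becomes necessary.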

  10. The Figured Worlds of High School Science Teachers: Uncovering Three-Dimensional Assessment Decisions

    Science.gov (United States)

    Ewald, Megan

    As a result of recent mandates of the Next Generation Science Standards, assessments are a "system of meaning" amidst a paradigm shift toward three-dimensional assessments. This study is motivated by two research questions: 1) how do high school science teachers describe their processes of decision-making in the development and use of three-dimensional assessments and 2) how do high school science teachers negotiate their identities as assessors in designing three-dimensional assessments. An important factor in teachers' assessment decision making is how they identify themselves as assessors. Therefore, this study investigated the teachers' roles as assessors through the Sociocultural Identity Theory. The most important contribution from this study is the emergent teacher assessment sub-identities: the modifier-recycler, the feeler-finder, and the creator. Using a qualitative phenomenological research design, focus groups, three-series interviews, think-alouds, and document analysis were utilized in this study. These qualitative methods were chosen to elicit rich conversations among teachers, make meaning of the teachers' experiences through in-depth interviews, amplify the thought processes of individual teachers while making assessment decisions, and analyze assessment documents in relation to teachers' perspectives. The findings from this study suggest that--of the 19 participants--only two teachers could consistently be identified as creators and aligned their assessment practices with NGSS. However, assessment sub-identities are not static and teachers may negotiate their identities from one moment to the next within socially constructed realms of interpretation known as figured worlds. Because teachers are positioned in less powerful figured worlds within the dominant discourse of standardization, this study raises awareness as to how the external pressures from more powerful figured worlds socially construct teachers' identities as assessors.
For teachers

  11. dimensional nonlinear evolution equations

    Indian Academy of Sciences (India)

    in real-life situations, it is important to find their exact solutions. Further, in ... But only little work is done on the high-dimensional equations. .... Similarly, to determine the values of d and q, we balance the linear term of the lowest order in eq.

  12. Simulating three-dimensional nonthermal high-energy photon emission in colliding-wind binaries

    Energy Technology Data Exchange (ETDEWEB)

    Reitberger, K.; Kissmann, R.; Reimer, A.; Reimer, O., E-mail: klaus.reitberger@uibk.ac.at [Institut für Astro- und Teilchenphysik and Institut für Theoretische Physik, Leopold-Franzens-Universität Innsbruck, A-6020 Innsbruck (Austria)

    2014-07-01

    Massive stars in binary systems have long been regarded as potential sources of high-energy γ rays. The emission is principally thought to arise in the region where the stellar winds collide and accelerate relativistic particles which subsequently emit γ rays. On the basis of a three-dimensional distribution function of high-energy particles in the wind collision region—as obtained by a numerical hydrodynamics and particle transport model—we present the computation of the three-dimensional nonthermal photon emission for a given line of sight. Anisotropic inverse Compton emission is modeled using the target radiation field of both stars. Photons from relativistic bremsstrahlung and neutral pion decay are computed on the basis of local wind plasma densities. We also consider photon-photon opacity effects due to the dense radiation fields of the stars. Results are shown for different stellar separations of a given binary system comprising a B star and a Wolf-Rayet star. The influence of orbital orientation with respect to the line of sight is also studied by using different orbital viewing angles. For the chosen electron-proton injection ratio of 10⁻², we present the ensuing photon emission in terms of two-dimensional projection maps, spectral energy distributions, and integrated photon flux values in various energy bands. Here, we find a transition from hadron-dominated to lepton-dominated high-energy emission with increasing stellar separation. In addition, we confirm findings from previous analytic modeling that the spectral energy distribution varies significantly with orbital orientation.

  13. High-speed three-dimensional plasma temperature determination of axially symmetric free-burning arcs

    International Nuclear Information System (INIS)

    Bachmann, B; Ekkert, K; Bachmann, J-P; Marques, J-L; Schein, J; Kozakov, R; Gött, G; Schöpp, H; Uhrlandt, D

    2013-01-01

    In this paper we introduce an experimental technique that allows for high-speed, three-dimensional determination of electron density and temperature in axially symmetric free-burning arcs. Optical filters with narrow spectral bands of 487.5–488.5 nm and 689–699 nm are utilized to gain two-dimensional spectral information of a free-burning argon tungsten inert gas arc. A setup of mirrors allows one to image identical arc sections in the two spectral bands onto a single camera chip. Two different Abel inversion algorithms have been developed to reconstruct the original radial distribution of emission coefficients detected within each spectral window and to confirm the results. Under the assumption of local thermodynamic equilibrium we calculate emission coefficients as a function of temperature by applying the Saha equation, the ideal gas law, the quasineutrality condition and the NIST compilation of spectral lines. Ratios of calculated emission coefficients are compared with measured ones, yielding local plasma temperatures. In the case of axial symmetry, the three-dimensional plasma temperature distributions have been determined at dc currents of 100, 125, 150 and 200 A, yielding temperatures up to 20000 K in the hot cathode region. These measurements have been validated by four different techniques utilizing a high-resolution spectrometer at different positions in the plasma, and the plasma temperatures show good agreement across the different methods. Additionally, spatially resolved transient plasma temperatures have been measured for a dc pulsed process at a high-speed frame rate of 33000 frames per second, showing the modulation of the arc isothermals with time and providing information about the sensitivity of the experimental approach. (paper)
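    The Abel inversion step above can be illustrated with the simplest reconstruction scheme, onion peeling, which divides the arc into annular shells and solves an upper-triangular path-length system. This is a generic sketch under a uniform-disc test case, not either of the authors' two algorithms:

    ```python
    import numpy as np

    def onion_peel(profile, dr=1.0):
        """Recover radial emission coefficients from a line-integrated,
        axially symmetric intensity profile (onion-peeling Abel inversion).
        profile[i] is the chord integral at lateral offset y = i*dr."""
        n = len(profile)
        A = np.zeros((n, n))
        for i in range(n):            # chord index (y = i*dr)
            for j in range(i, n):     # shell index (r from j*dr to (j+1)*dr)
                y = i * dr
                A[i, j] = 2.0 * (np.sqrt(((j + 1) * dr) ** 2 - y ** 2)
                                 - np.sqrt((j * dr) ** 2 - y ** 2))
        # A is upper triangular, so the system has a unique solution
        return np.linalg.solve(A, np.asarray(profile, float))

    # Sanity check: a uniform disc of radius R with emissivity 1 projects to
    # chords of length 2*sqrt(R^2 - y^2); the inversion should return ~1 per shell.
    n, dr = 50, 0.1
    R = n * dr
    y = np.arange(n) * dr
    profile = 2.0 * np.sqrt(R ** 2 - y ** 2)
    eps = onion_peel(profile, dr)     # approximately all ones
    ```

    Real data additionally require smoothing, since Abel inversion amplifies noise near the axis.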

  14. THREE-DIMENSIONAL OBSERVATIONS ON THICK BIOLOGICAL SPECIMENS BY HIGH VOLTAGE ELECTRON MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Tetsuji Nagata

    2011-05-01

    Full Text Available Thick biological specimens, prepared as whole-mount cultured cells or thick sections from embedded tissues, were stained with histochemical reactions, such as thiamine pyrophosphatase, glucose-6-phosphatase, cytochrome oxidase, acid phosphatase, DAB reactions and radioautography, to observe 3-D ultrastructures of cell organelles, producing stereo-pairs by high voltage electron microscopy at accelerating voltages of 400-1000 kV. The organelles demonstrated were the Golgi apparatus, endoplasmic reticulum, mitochondria, lysosomes, peroxisomes, pinocytotic vesicles and incorporations of radioactive compounds. As a result, these cell organelles were observed three-dimensionally and the relative relationships between them were demonstrated.

  15. Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes

    Science.gov (United States)

    Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong

    2018-04-01

    In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.

  16. The high exponent limit $p \\to \\infty$ for the one-dimensional nonlinear wave equation

    OpenAIRE

    Tao, Terence

    2009-01-01

    We investigate the behaviour of solutions $\\phi = \\phi^{(p)}$ to the one-dimensional nonlinear wave equation $-\\phi_{tt} + \\phi_{xx} = -|\\phi|^{p-1} \\phi$ with initial data $\\phi(0,x) = \\phi_0(x)$, $\\phi_t(0,x) = \\phi_1(x)$, in the high exponent limit $p \\to \\infty$ (holding $\\phi_0, \\phi_1$ fixed). We show that if the initial data $\\phi_0, \\phi_1$ are smooth with $\\phi_0$ taking values in $(-1,1)$ and obey a mild non-degeneracy condition, then $\\phi$ converges locally uniformly to a piecewis...

  17. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon–hydrogen bonds

    KAUST Repository

    Wang, Liang

    2015-04-22

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold–gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon–hydrogen bonds with molecular oxygen.

  18. Electric Field Guided Assembly of One-Dimensional Nanostructures for High Performance Sensors

    Directory of Open Access Journals (Sweden)

    Wing Kam Liu

    2012-05-01

    Full Text Available Various nanowire or nanotube-based devices have been demonstrated to fulfill the anticipated future demands on sensors. To fabricate such devices, electric field-based methods have demonstrated a great potential to integrate one-dimensional nanostructures into various forms. This review paper discusses theoretical and experimental aspects of the working principles, the assembled structures, and the unique functions associated with electric field-based assembly. The challenges and opportunities of the assembly methods are addressed in conjunction with future directions toward high performance sensors.

  19. High-dimensional chaos from self-sustained collisions of solitons

    Energy Technology Data Exchange (ETDEWEB)

    Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)

    2014-06-16

    We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.

  20. Inferring biological tasks using Pareto analysis of high-dimensional data.

    Science.gov (United States)

    Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri

    2015-03-01

    We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
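    ParTI rests on the idea that trade-offs between tasks place data near a low-dimensional polytope whose vertices are archetypes. The underlying notion of Pareto optimality can be sketched as follows (the two-task cost data are hypothetical; this is not the ParTI code):

    ```python
    import numpy as np

    def pareto_front(points):
        """Indices of Pareto-optimal rows under minimization: a point is kept
        if no other point is <= in every coordinate and strictly < in at
        least one (i.e. nothing dominates it)."""
        pts = np.asarray(points, float)
        keep = np.ones(len(pts), dtype=bool)
        for i in range(len(pts)):
            dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
            if dominated.any():
                keep[i] = False
        return np.flatnonzero(keep).tolist()

    # Each axis is the (hypothetical) cost of performing one biological task badly;
    # points on the front represent different task trade-offs.
    pts = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [4.0, 4.0]]
    print(pareto_front(pts))  # → [0, 1, 2]
    ```

    In ParTI the analogous geometric object is the polytope fitted to the data cloud, with archetypes at its vertices rather than an explicit front.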

  1. A novel algorithm of artificial immune system for high-dimensional function numerical optimization

    Institute of Scientific and Technical Information of China (English)

    DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen

    2005-01-01

    Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using the theory of Markov chains, IMCPA is proved to be convergent. Compared with other evolutionary programming algorithms (such as the breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks such as high-dimensional function optimization; it maintains the diversity of the population, avoids premature convergence to some extent, and has a higher convergence speed.
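    A toy clonal-selection loop, in the spirit of (but much simpler than) IMCPA, illustrates how cloning, hypermutation, elitism, and random newcomers interact on a high-dimensional test function. All parameters and the step-size rule are illustrative assumptions:

    ```python
    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def clonal_optimize(f, dim, pop=20, n_select=5, n_clones=3, gens=500, seed=0):
        """Toy clonal-selection optimizer (not the paper's IMCPA): select the
        best antibodies, clone and hypermutate them (better rank and better
        fitness give a smaller step), keep an elite memory cell, and refill
        the population with random newcomers to maintain diversity."""
        rng = np.random.default_rng(seed)
        P = rng.uniform(-5.0, 5.0, (pop, dim))
        for _ in range(gens):
            P = P[np.argsort([f(x) for x in P])]        # best antibodies first
            offspring = [P[0]]                          # elitism: memory cell
            for rank in range(n_select):
                step = 0.3 * (rank + 1) / n_select * np.sqrt(f(P[rank]) + 1e-12)
                for _ in range(n_clones):
                    offspring.append(P[rank] + step * rng.standard_normal(dim))
            while len(offspring) < pop:                 # random newcomers
                offspring.append(rng.uniform(-5.0, 5.0, dim))
            P = np.array(offspring)
        return min(P, key=f)

    best = clonal_optimize(sphere, dim=10)  # should come close to the optimum at 0
    ```

    Scaling the mutation step with the parent's fitness gives the self-adaptive behavior that lets such algorithms refine solutions without losing exploratory diversity.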

  2. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive the behavior of the wave beams is appreciably affected by the self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  3. Computing and visualizing time-varying merge trees for high-dimensional data

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
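    The merge tree underlying this method can be computed with a union-find sweep over vertices in increasing function value. A minimal 1-D sketch of that building block (the tracking across time steps is not reproduced here):

    ```python
    import numpy as np

    def merge_tree_1d(values):
        """Merge tree of the sublevel sets of a 1-D scalar field. Vertices are
        swept in increasing value; when a vertex joins two existing components,
        a merge event (saddle_value, surviving_minimum, absorbed_minimum) is
        recorded. The component with the lower minimum survives (elder rule)."""
        parent, comp_min, merges = {}, {}, []

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path compression
                i = parent[i]
            return i

        for i in np.argsort(values):
            i = int(i)
            parent[i] = i
            comp_min[i] = i
            roots = sorted({find(j) for j in (i - 1, i + 1) if j in parent},
                           key=lambda r: values[comp_min[r]])
            if len(roots) == 2:        # saddle: two components meet at vertex i
                merges.append((float(values[i]),
                               comp_min[roots[0]], comp_min[roots[1]]))
            if roots:                  # union into the component with lowest min
                parent[i] = roots[0]
                for r in roots[1:]:
                    parent[r] = roots[0]
        return merges

    f = np.array([3.0, 1.0, 4.0, 0.0, 2.0, 5.0])
    print(merge_tree_1d(f))  # → [(4.0, 3, 1)]: the branch born at index 1 merges at 4.0
    ```

    The paper's method additionally matches such subtrees between consecutive time steps to track features over time.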

  4. Two-dimensional gold nanostructures with high activity for selective oxidation of carbon-hydrogen bonds

    Science.gov (United States)

    Wang, Liang; Zhu, Yihan; Wang, Jian-Qiang; Liu, Fudong; Huang, Jianfeng; Meng, Xiangju; Basset, Jean-Marie; Han, Yu; Xiao, Feng-Shou

    2015-04-01

    Efficient synthesis of stable two-dimensional (2D) noble metal catalysts is a challenging topic. Here we report the facile synthesis of 2D gold nanosheets via a wet chemistry method, by using layered double hydroxide as the template. Detailed characterization with electron microscopy and X-ray photoelectron spectroscopy demonstrates that the nanosheets are negatively charged and [001] oriented with thicknesses varying from single to a few atomic layers. X-ray absorption spectroscopy reveals unusually low gold-gold coordination numbers. These gold nanosheets exhibit high catalytic activity and stability in the solvent-free selective oxidation of carbon-hydrogen bonds with molecular oxygen.

  5. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
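    The non-iid structure of the partial-likelihood summands can be made concrete: each event's term involves the whole risk set. A small NumPy sketch of the lasso-penalized objective, with hypothetical toy survival data and assuming no tied event times:

    ```python
    import numpy as np

    def neg_log_partial_likelihood(beta, X, time, event):
        """Negative log partial likelihood of the Cox model (no ties).
        Each event's term couples all subjects still at risk, which is why
        the summands are neither iid nor Lipschitz in general."""
        order = np.argsort(time)                  # ascending event/censoring times
        eta = (X @ beta)[order]
        d = np.asarray(event, float)[order]
        # log of the risk-set sums: subjects with time >= time_i
        log_risk = np.log(np.cumsum(np.exp(eta[::-1]))[::-1])
        return -np.sum(d * (eta - log_risk))

    def lasso_objective(beta, X, time, event, lam):
        """Lasso-penalized Cox objective: partial likelihood plus l1 penalty."""
        return (neg_log_partial_likelihood(beta, X, time, event)
                + lam * np.abs(beta).sum())

    X = np.array([[1.0], [0.0], [2.0]])           # hypothetical covariates
    time = np.array([1.0, 2.0, 3.0])
    event = np.array([1, 1, 0])                   # third subject censored
    print(neg_log_partial_likelihood(np.zeros(1), X, time, event))  # log(3) + log(2)
    ```

    At beta = 0 each event contributes the log of its risk-set size, which makes the coupling across subjects easy to see.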

  6. The proximal first exon architecture of the murine ghrelin gene is highly similar to its human orthologue

    Directory of Open Access Journals (Sweden)

    Seim Inge

    2009-05-01

    Full Text Available Abstract Background The murine ghrelin gene (Ghrl, originally sequenced from stomach tissue, contains five exons and a single transcription start site in a short, 19 bp first exon (exon 0. We recently isolated several novel first exons of the human ghrelin gene and found evidence of a complex transcriptional repertoire. In this report, we examined the 5' exons of the murine ghrelin orthologue in a range of tissues using 5' RACE. Findings 5' RACE revealed two transcription start sites (TSSs in exon 0 and four TSSs in intron 0, which correspond to 5' extensions of exon 1. Using quantitative, real-time RT-PCR (qRT-PCR, we demonstrated that extended exon 1 containing Ghrl transcripts are largely confined to the spleen, adrenal gland, stomach, and skin. Conclusion We demonstrate that multiple transcription start sites are present in exon 0 and an extended exon 1 of the murine ghrelin gene, similar to the proximal first exon organisation of its human orthologue. The identification of several transcription start sites in intron 0 of mouse ghrelin (resulting in an extension of exon 1 raises the possibility that developmental-, cell- and tissue-specific Ghrl mRNA species are created by employing alternative promoters and further studies of the murine ghrelin gene are warranted.

  7. Rocky Mountain Spotted Fever Characterization and Comparison to Similar Illnesses in a Highly Endemic Area—Arizona, 2002–2011

    Science.gov (United States)

    Traeger, Marc S.; Regan, Joanna J.; Humpherys, Dwight; Mahoney, Dianna L.; Martinez, Michelle; Emerson, Ginny L.; Tack, Danielle M.; Geissler, Aimee; Yasmin, Seema; Lawson, Regina; Hamilton, Charlene; Williams, Velda; Levy, Craig; Komatsu, Kenneth; McQuiston, Jennifer H.; Yost, David A.

    2015-01-01

    Background Rocky Mountain spotted fever (RMSF) has emerged as a significant cause of morbidity and mortality since 2002 on tribal lands in Arizona. The explosive nature of this outbreak and the recognition of an unexpected tick vector, Rhipicephalus sanguineus, prompted an investigation to characterize RMSF in this unique setting and compare RMSF cases to similar illnesses. Methods We compared medical records of 205 patients with RMSF and 175 with non-RMSF illnesses that prompted RMSF testing during 2002–2011 from 2 Indian reservations in Arizona. Results RMSF cases in Arizona occurred year-round and peaked later (July–September) than RMSF cases reported from other US regions. Cases were younger (median age, 11 years) and reported fever and rash less frequently, compared to cases from other US regions. Fever was present in 81% of cases but not significantly different from that in patients with non-RMSF illnesses. Classic laboratory abnormalities such as low sodium and platelet counts had small and subtle differences between cases and patients with non-RMSF illnesses. Imaging studies reflected the variability and complexity of the illness but proved unhelpful in clarifying the early diagnosis. Conclusions RMSF epidemiology in this region appears different than RMSF elsewhere in the United States. No specific pattern of signs, symptoms, or laboratory findings occurred with enough frequency to consistently differentiate RMSF from other illnesses. Due to the nonspecific and variable nature of RMSF presentations, clinicians in this region should aggressively treat febrile illnesses and sepsis with doxycycline for suspected RMSF. PMID:25697743

  8. Three-dimensional interconnected porous graphitic carbon derived from rice straw for high performance supercapacitors

    Science.gov (United States)

    Jin, Hong; Hu, Jingpeng; Wu, Shichao; Wang, Xiaolan; Zhang, Hui; Xu, Hui; Lian, Kun

    2018-04-01

    Three-dimensional interconnected porous graphitic carbon materials are synthesized via a combined graphitization and activation process with rice straw as the carbon source. The physicochemical properties of the materials are characterized by nitrogen adsorption/desorption, Fourier-transform infrared spectroscopy, X-ray diffraction, Raman spectroscopy, scanning electron microscopy and transmission electron microscopy. The results demonstrate that the as-prepared carbon has a high specific surface area (3333 m2 g-1) with abundant mesoporous and microporous structures. It exhibits superb performance in symmetric double-layer capacitors, with a high specific capacitance of 400 F g-1 at a current density of 0.1 A g-1, good rate performance (312 F g-1 at a current density of 5 A g-1) and favorable cycle stability (6.4% loss after 10000 cycles at 5 A g-1) in the aqueous electrolyte of 6 M KOH. Thus, rice straw is a promising carbon source for fabricating inexpensive, sustainable and high-performance supercapacitor electrode materials.

  9. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2017-04-25

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  10. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods--from the areas reported by the control software and by fitting the data with a Gaussian model; these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatographic control software gave superior limits of detection, on the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
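    The Gaussian-model peak area mentioned above can be sketched by fitting a parabola to the logarithm of the points around the apex, which is exact for a Gaussian peak. The synthetic peak below is an illustrative assumption, not the authors' implementation:

    ```python
    import numpy as np

    def gaussian_peak_area(t, y):
        """Fit ln(y) near the apex with a parabola (exact for a Gaussian)
        and return the model peak area height * sigma * sqrt(2*pi)."""
        i = int(np.argmax(y))
        sl = slice(max(i - 3, 0), min(i + 4, len(y)))   # a few apex points
        c2, c1, c0 = np.polyfit(t[sl], np.log(y[sl]), 2)
        sigma = np.sqrt(-1.0 / (2.0 * c2))
        mu = -c1 / (2.0 * c2)
        height = np.exp(c0 + c1 * mu + c2 * mu ** 2)
        return height * sigma * np.sqrt(2.0 * np.pi)

    # Synthetic second-dimension peak: height 2.0, retention 1.0 min, sigma 0.1 min
    t = np.linspace(0.0, 2.0, 201)
    y = 2.0 * np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2)
    area_model = gaussian_peak_area(t, y)
    area_trapz = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))  # direct integration
    ```

    On real chromatograms the two estimates diverge for overlapping or tailing peaks, which is one reason the paper compares the software-reported and Gaussian-fit areas.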

  11. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    Dimitris G. Stavrakoudis

    2012-04-01

    Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC, a Genetic Fuzzy Rule-Based Classification System (GFRBCS which targets at reducing the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL approach, whereby a rule extraction algorithm (REA is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification as well as in 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  12. Stable high efficiency two-dimensional perovskite solar cells via cesium doping

    KAUST Repository

    Zhang, Xu

    2017-08-15

    Two-dimensional (2D) organic-inorganic perovskites have recently emerged as one of the most important thin-film solar cell materials owing to their excellent environmental stability. The remaining major pitfall is their relatively poor photovoltaic performance in contrast to 3D perovskites. In this work we demonstrate cesium cation (Cs) doped 2D (BA)(MA)PbI perovskite solar cells giving a power conversion efficiency (PCE) as high as 13.7%, the highest among the reported 2D devices, with excellent humidity resistance. The enhanced efficiency from 12.3% (without Cs) to 13.7% (with 5% Cs) is attributed to perfectly controlled crystal orientation, an increased grain size of the 2D planes, superior surface quality, reduced trap-state density, enhanced charge-carrier mobility and charge-transfer kinetics. Surprisingly, it is found that the Cs doping yields superior stability for the 2D perovskite solar cells when subjected to a high humidity environment without encapsulation. The device doped using 5% Cs degrades only ca. 10% after 1400 hours of exposure in 30% relative humidity (RH), and exhibits significantly improved stability under heating and high moisture environments. Our results provide an important step toward air-stable and fully printable low dimensional perovskites as a next-generation renewable energy source.

  13. High-dimensional quantum key distribution with the entangled single-photon-added coherent state

    International Nuclear Information System (INIS)

    Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei

    2017-01-01

    High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.

  14. Latent class models for joint analysis of disease prevalence and high-dimensional semicontinuous biomarker data.

    Science.gov (United States)

    Zhang, Bo; Chen, Zhen; Albert, Paul S

    2012-01-01

    High-dimensional biomarker data are often collected in epidemiological studies when the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and a complex mean-variance relationship in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.

  15. Three-dimensional laparoscopy vs 2-dimensional laparoscopy with high-definition technology for abdominal surgery: a systematic review.

    Science.gov (United States)

    Fergo, Charlotte; Burcharth, Jakob; Pommergaard, Hans-Christian; Kildebro, Niels; Rosenberg, Jacob

    2017-01-01

    This systematic review investigates newer generation 3-dimensional (3D) laparoscopy vs 2-dimensional (2D) laparoscopy in terms of error rating, performance time, and subjective assessment, as early comparisons have shown contradictory results due to technological shortcomings. This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Randomized controlled trials (RCTs) comparing newer generation 3D-laparoscopy with 2D-laparoscopy were included through searches in Pubmed, EMBASE, and Cochrane Central Register of Controlled Trials database. Of 643 articles, 13 RCTs were included, of which 2 were clinical trials. Nine of 13 trials (69%) and 10 of 13 trials (77%) found a significant reduction in performance time and error, respectively, with the use of 3D-laparoscopy. Overall, 3D-laparoscopy was found to be superior or equal to 2D-laparoscopy. All trials featuring subjective evaluation found a superiority of 3D-laparoscopy. More clinical RCTs are still awaited so that these convincing results can be reproduced. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Rocky mountain spotted fever characterization and comparison to similar illnesses in a highly endemic area-Arizona, 2002-2011.

    Science.gov (United States)

    Traeger, Marc S; Regan, Joanna J; Humpherys, Dwight; Mahoney, Dianna L; Martinez, Michelle; Emerson, Ginny L; Tack, Danielle M; Geissler, Aimee; Yasmin, Seema; Lawson, Regina; Hamilton, Charlene; Williams, Velda; Levy, Craig; Komatsu, Kenneth; McQuiston, Jennifer H; Yost, David A

    2015-06-01

    Rocky Mountain spotted fever (RMSF) has emerged as a significant cause of morbidity and mortality since 2002 on tribal lands in Arizona. The explosive nature of this outbreak and the recognition of an unexpected tick vector, Rhipicephalus sanguineus, prompted an investigation to characterize RMSF in this unique setting and compare RMSF cases to similar illnesses. We compared medical records of 205 patients with RMSF and 175 with non-RMSF illnesses that prompted RMSF testing during 2002-2011 from 2 Indian reservations in Arizona. RMSF cases in Arizona occurred year-round and peaked later (July-September) than RMSF cases reported from other US regions. Cases were younger (median age, 11 years) and reported fever and rash less frequently, compared to cases from other US regions. Fever was present in 81% of cases but not significantly different from that in patients with non-RMSF illnesses. Classic laboratory abnormalities such as low sodium and platelet counts had small and subtle differences between cases and patients with non-RMSF illnesses. Imaging studies reflected the variability and complexity of the illness but proved unhelpful in clarifying the early diagnosis. RMSF epidemiology in this region appears different than RMSF elsewhere in the United States. No specific pattern of signs, symptoms, or laboratory findings occurred with enough frequency to consistently differentiate RMSF from other illnesses. Due to the nonspecific and variable nature of RMSF presentations, clinicians in this region should aggressively treat febrile illnesses and sepsis with doxycycline for suspected RMSF. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  17. Men and Women Exhibit Similar Acute Hypotensive Responses After Low, Moderate, or High-Intensity Plyometric Training.

    Science.gov (United States)

    Ramírez-Campillo, Rodrigo; Abad-Colil, Felipe; Vera, Maritza; Andrade, David C; Caniuqueo, Alexis; Martínez-Salazar, Cristian; Nakamura, Fábio Y; Arazi, Hamid; Cerda-Kohler, Hugo; Izquierdo, Mikel; Alonso-Martínez, Alicia M

    2016-01-01

    The aim of this study was to compare the acute effects of low-, moderate-, high-, and combined-intensity plyometric training on heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and rate-pressure product (RPP) cardiovascular responses in male and female normotensive subjects. Fifteen (8 women) physically active normotensive subjects participated in this study (age 23.5 ± 2.6 years, body mass index 23.8 ± 2.3 kg · m(-2)). Using a randomized crossover design, trials were conducted with rest intervals of at least 48 hours. Each trial comprised 120 jumps, using boxes of 20, 30, and 40 cm for low, moderate, and high intensity, respectively. For combined intensity, the 3 height boxes were combined. Measurements were taken before and after (i.e., every 10 minutes for a period of 90 minutes) each trial. When data responses of men and women were combined, a mean reduction in SBP, DBP, and RPP was observed after all plyometric intensities. No significant differences were observed pre- or postexercise (at any time point) for HR, SBP, DBP, or RPP when low-, moderate-, high-, or combined-intensity trials were compared. No significant differences were observed between male and female subjects, except for a higher SBP reduction in women (-12%) compared with men (-7%) after high-intensity trial. Although there were minor differences across postexercise time points, collectively, the data demonstrated that all plyometric training intensities can induce an acute postexercise hypotensive effect in young normotensive male and female subjects.

  18. Multi-dimensional analysis of high resolution γ-ray data

    Energy Technology Data Exchange (ETDEWEB)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J. [Strasbourg-1 Univ., 67 (France). Centre de Recherches Nucleaires

    1992-12-31

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs.

  19. Three-dimensional bicontinuous nanoporous Au/polyaniline hybrid films for high-performance electrochemical supercapacitors

    Science.gov (United States)

    Lang, Xingyou; Zhang, Ling; Fujita, Takeshi; Ding, Yi; Chen, Mingwei

    2012-01-01

    We report three-dimensional bicontinuous nanoporous Au/polyaniline (PANI) composite films made by one-step electrochemical polymerization of a PANI shell onto dealloyed nanoporous gold (NPG) skeletons for applications in electrochemical supercapacitors. The NPG/PANI based supercapacitors exhibit ultrahigh volumetric capacitance (∼1500 F cm-3) and energy density (∼0.078 Wh cm-3), which are seven and four orders of magnitude higher than those of electrolytic capacitors, with the same power density up to ∼190 W cm-3. The outstanding capacitive performance results from a novel nanoarchitecture in which pseudocapacitive PANI shells are incorporated into the pore channels of highly conductive NPG, making them promising candidates as electrode materials in supercapacitor devices combining high energy-storage densities with high power delivery.

  20. Multi-dimensional analysis of high resolution γ-ray data

    International Nuclear Information System (INIS)

    Flibotte, S.; Huettmeier, U.J.; France, G. de; Haas, B.; Romain, P.; Theisen, Ch.; Vivien, J.P.; Zen, J.

    1992-01-01

    A new generation of high resolution γ-ray spectrometers capable of recording high-fold coincidence events with a large efficiency will soon be available. Algorithms are developed to analyze high-fold γ-ray coincidences. As a contribution to the software development associated with the EUROGAM spectrometer, the performances of computer codes designed to select multi-dimensional gates from 3-, 4- and 5-fold coincidence databases were tested. The tests were performed on events generated with a Monte Carlo simulation and also on real experimental triple data recorded with the 8π spectrometer and with a preliminary version of the EUROGAM array. (R.P.) 14 refs.; 3 figs.; 3 tabs

  1. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, placing no demands on high-end computing hardware, and being easy to operate. (semiconductor integrated circuits)

  2. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, placing no demands on high-end computing hardware, and being easy to operate. (semiconductor integrated circuits)

  3. High-efficiency one-dimensional atom localization via two parallel standing-wave fields

    International Nuclear Information System (INIS)

    Wang, Zhiping; Wu, Xuqiang; Lu, Liang; Yu, Benli

    2014-01-01

    We present a new scheme for high-efficiency one-dimensional (1D) atom localization via measurement of the upper-state population or the probe absorption in a four-level N-type atomic system. By applying two classical standing-wave fields, the localization peak position and number, as well as the conditional position probability, can be easily controlled by the system parameters, and sub-half-wavelength atom localization is also observed. More importantly, the atom can be detected with 100% probability in the subwavelength domain when the corresponding conditions are satisfied. The proposed scheme may open up a promising way to achieve high-precision and high-efficiency 1D atom localization. (paper)

  4. High-resolution and high-throughput multichannel Fourier transform spectrometer with two-dimensional interferogram warping compensation

    Science.gov (United States)

    Watanabe, A.; Furukawa, H.

    2018-04-01

    The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that realises both high resolution and high throughput using a two-dimensional area sensor. For the spectral resolution, we obtained an interferogram with a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied, accumulating the signal across vertical pixels for higher throughput. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.

  5. AucPR: An AUC-based approach using penalized regression for disease prediction with high-dimensional omics data

    OpenAIRE

    Yu, Wenbao; Park, Taesung

    2014-01-01

    Motivation It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. Results We propose an AUC-based approach u...

  6. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam to a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser, which consists of a Mach-Zehnder (MZ) interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate performance has been achieved.
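    As a toy illustration of the coding idea only (not the authors' optical implementation), the 256-ary scheme amounts to mapping each byte to the subset of 8 OAM modes present in the superposition. The mode assignment l = 1..8 below is an assumption for illustration:

```python
# Hypothetical sketch: bit j of the byte set -> OAM mode OAM_MODES[j] is
# included in the superposition alongside the Gaussian reference mode.
N = 8
OAM_MODES = list(range(1, N + 1))   # assumed topological charges l = 1..8

def encode(byte):
    """Return the list of OAM modes present for this 8-bit symbol."""
    return [OAM_MODES[j] for j in range(N) if byte >> j & 1]

def decode(modes):
    """Recover the byte from the set of detected OAM modes."""
    byte = 0
    for l in modes:
        byte |= 1 << OAM_MODES.index(l)
    return byte

msg = 0b10110010
assert decode(encode(msg)) == msg
print(encode(msg))  # [2, 5, 6, 8]
```

    Each transmitted symbol thus carries 8 bits, which is how 8 OAM modes yield a 256-ary alphabet (and 4 modes a 16-ary one).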

  7. On the sensitivity of dimensional stability of high density polyethylene on heating rate

    Directory of Open Access Journals (Sweden)

    2007-02-01

    Although high density polyethylene (HDPE) is one of the most widely used industrial polymers, its application relative to its potential has been limited by its low dimensional stability, particularly at high temperature. The dilatometry test is a method for examining the thermal dimensional stability (TDS) of the material. Despite its importance, the simulation of the TDS of HDPE during the dilatometry test has received little attention from other investigators. The main goal of this research is therefore the simulation of the TDS of HDPE, and the simulation results have been validated against practical experiments. For this purpose the standard dilatometry test was performed on HDPE specimens. The secant coefficient of linear thermal expansion was computed from the test. Then, by considering the boundary conditions and material properties, the dilatometry test was simulated at different heating rates and the thermal strain versus temperature was calculated. The results showed that the simulation results and the practical experiments were very close.

  8. Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate

    Directory of Open Access Journals (Sweden)

    Seokhoon Kim

    2015-01-01

    This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also sets a transmittable group value for each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approach is that existing state-of-the-art schemes use a duty cycle and sleep mode to save energy consumption of individual sensor devices, whereas the proposed scheme employs the group management MAC scheme for sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregate.

  9. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
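    A minimal sketch of the wrapper idea: a population of candidate feature masks is evolved by a toy swarm rule (drift toward the best mask plus random mutation, standing in for the paper's metaheuristics), with a nearest-centroid classifier plugged in as the fitness function. The synthetic data and all parameters are assumptions for illustration:

```python
import random

random.seed(1)
N_FEATURES = 20  # hypothetical data: only features 0-2 carry class signal

def make_sample():
    y = random.randint(0, 1)
    x = [random.gauss(0.0, 1.0) for _ in range(N_FEATURES)]
    for j in range(3):               # informative features shift with the class
        x[j] += 2.0 * y
    return x, y

train = [make_sample() for _ in range(120)]
test = [make_sample() for _ in range(80)]

def accuracy(mask):
    """Fitness: accuracy of a nearest-centroid classifier on the selected features."""
    idx = [j for j in range(N_FEATURES) if mask[j]]
    if not idx:
        return 0.0
    cent = {}
    for c in (0, 1):
        rows = [x for x, y in train if y == c]
        cent[c] = [sum(r[j] for r in rows) / len(rows) for j in idx]
    hits = 0
    for x, y in test:
        d = {c: sum((x[j] - cent[c][k]) ** 2 for k, j in enumerate(idx))
             for c in (0, 1)}
        hits += (min(d, key=d.get) == y)
    return hits / len(test)

# Toy swarm: each particle (bit mask) drifts toward the global best, with mutation.
swarm = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(10)]
best = list(max(swarm, key=accuracy))
for _ in range(30):
    for p in swarm:
        for j in range(N_FEATURES):
            if random.random() < 0.2:        # attraction toward the global best
                p[j] = best[j]
            elif random.random() < 0.05:     # random exploration
                p[j] = 1 - p[j]
        if accuracy(p) > accuracy(best):
            best = list(p)

selected = [j for j in range(N_FEATURES) if best[j]]
print("selected features:", selected, "accuracy: %.2f" % accuracy(best))
```

    Swapping in another base classifier or a genuine PSO/bat/firefly update rule only changes `accuracy` and the inner loop, which is the flexibility the abstract emphasizes.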

  10. The validation and assessment of machine learning: a game of prediction from high-dimensional data.

    Directory of Open Access Journals (Sweden)

    Tune H Pers

    In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
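    The "strictly proper scoring rule" the game relies on can be illustrated with the Brier score: the expected penalty is minimized only by reporting the true probability, so honest probabilistic predictions win. A minimal numerical check (the true probability 0.7 is an arbitrary assumption for illustration):

```python
def brier(q, outcome):
    """Brier score for predicted probability q of a binary outcome."""
    return (q - outcome) ** 2

def expected_score(q, p):
    """Expected Brier score when the outcome is 1 with true probability p."""
    return p * brier(q, 1) + (1 - p) * brier(q, 0)

p_true = 0.7
grid = [i / 100 for i in range(101)]
scores = {round(q, 2): expected_score(q, p_true) for q in grid}
best_q = min(scores, key=scores.get)
print(best_q)  # 0.7 -- the expected score is minimized at the true probability
```

    Any strictly proper rule (Brier, logarithmic) shares this property, which is what makes the game's model comparisons fair.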

  11. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov Blanket of Y. We show that in some cases, it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches of all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
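    The infeasibility of pairwise approximations can be seen with an XOR-style toy example: each feature alone carries zero mutual information about the class, yet the pair jointly satisfies I(X;Y) = H(Y) and is therefore a Markov Blanket in the paper's sense. A plug-in estimate on synthetic data (the data are assumed for illustration):

```python
from collections import Counter
from math import log2

def entropy(values):
    """Plug-in Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Y is the XOR of two binary features: each feature alone is useless,
# but the pair jointly determines Y.
x1 = [0, 0, 1, 1] * 25
x2 = [0, 1, 0, 1] * 25
y = [a ^ b for a, b in zip(x1, x2)]

print(mutual_info(x1, y))                    # 0.0  (pairwise MI is zero)
print(mutual_info(list(zip(x1, x2)), y))     # 1.0  = H(y), so (x1, x2) is a Markov Blanket
print(entropy(y))                            # 1.0
```

    No algebraic combination of the two zero-valued pairwise terms can reproduce the joint value of 1 bit, which is exactly the abstract's point.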

  12. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour on the same dataset in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the computing sensitivity and specificity of each model. Results: The artificial neural network model based on multilayer perceptron yielded a higher classification rate than the models produced by other methods. The pairwise t-test showed a statistical significance between the artificial neural network and the k-nearest neighbour model, while the difference among other methods was not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.

  13. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    Science.gov (United States)

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Quantum secret sharing based on modulated high-dimensional time-bin entanglement

    International Nuclear Information System (INIS)

    Takesue, Hiroki; Inoue, Kyo

    2006-01-01

    We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets. Then, cheating attempts can be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes
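    The sign-flipping mechanism can be caricatured classically: a random phase bit flips the correlation between the two recipients' outcomes, so each outcome alone looks random while their XOR reproduces the sender's key. A toy simulation (not a quantum model; the bit-level abstraction is an assumption for illustration):

```python
import random

random.seed(2)

def round_trip():
    """One round: the sender's {0, pi} phase bit sets the correlation sign."""
    phase = random.randint(0, 1)   # sender's random phase bit (the key bit)
    a = random.randint(0, 1)       # recipient A's raw measurement outcome
    b = a ^ phase                  # recipient B's outcome: correlated or anti-correlated
    return phase, a, b

key_sender, key_joint, a_alone = [], [], []
for _ in range(1000):
    phase, a, b = round_trip()
    key_sender.append(phase)
    key_joint.append(a ^ b)        # recoverable only by cooperating recipients
    a_alone.append(a)

print(key_joint == key_sender)               # True: cooperation recovers the key
print(abs(sum(a_alone) / 1000 - 0.5) < 0.1)  # True: one outcome alone is near-random
```

    This captures only why the two recipients must cooperate; the security against eavesdropping rests on the nonorthogonality of the time-bin entangled states, which this sketch does not model.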

  15. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with the existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from the studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
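    The AUC underlying the CV-AUC criterion is the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal plug-in computation (the scores and labels are made up for illustration):

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney form); ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]
labels = [1,   1,   0,   1,   0,    0]
print(auc(scores, labels))  # 8/9 ≈ 0.889
```

    In the paper's setting this quantity is computed on held-out folds for each candidate tuning parameter, and the parameter maximizing the cross-validated value is selected.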

  16. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes even more difficult when the data are simultaneously high-dimensional. Skewed data of this kind often appear in the biomedical field. In this study, we address the problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
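    A stripped-down sketch of the asymmetric bagging idea: every training sample keeps all minority-class examples and undersamples the majority class to the same size, and the base learners then vote. A threshold "stump" stands in for the paper's SVM base classifier; the 1-D data and all parameters are assumptions for illustration:

```python
import random

random.seed(0)

# Hypothetical imbalanced data: a single score, minority class 1 centred higher.
majority = [(random.gauss(0.0, 1.0), 0) for _ in range(200)]
minority = [(random.gauss(2.0, 1.0), 1) for _ in range(20)]

def train_stump(sample):
    """Base learner: a threshold midway between the class means (SVM stand-in)."""
    m0 = [x for x, y in sample if y == 0]
    m1 = [x for x, y in sample if y == 1]
    return (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2.0

# Asymmetric bagging: keep ALL minority examples in every round and draw a
# random majority subset of equal size, so each base learner sees balanced data.
thresholds = []
for _ in range(25):
    balanced = random.sample(majority, len(minority)) + minority
    thresholds.append(train_stump(balanced))

def predict(x):
    """Majority vote of the base classifiers."""
    votes = sum(x > t for t in thresholds)
    return int(votes > len(thresholds) / 2)

print(predict(3.0), predict(-1.0))  # 1 0
```

    The FSS refinement in the paper additionally varies the feature subset each base learner sees; here there is only one feature, so that step is omitted.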

  17. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability.

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-07

    Mass production of high-quality graphene with low cost is the footstone for its widespread practical applications. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from the chemically exfoliated graphene, thus-produced biomorphic graphene is highly crystallized with atomic layer-thickness controllability, structural designability and less noncarbon impurities. In particular, the individual graphene microarchitectures preserve a three-dimensional naturally curved surface morphology of original diatom frustules, effectively overcoming the interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from as-made graphene powders, compatible with either rod-coating, or inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (∼110,700 S m-1 at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  18. Growing three-dimensional biomorphic graphene powders using naturally abundant diatomite templates towards high solution processability

    Science.gov (United States)

    Chen, Ke; Li, Cong; Shi, Liurong; Gao, Teng; Song, Xiuju; Bachmatiuk, Alicja; Zou, Zhiyu; Deng, Bing; Ji, Qingqing; Ma, Donglin; Peng, Hailin; Du, Zuliang; Rümmeli, Mark Hermann; Zhang, Yanfeng; Liu, Zhongfan

    2016-11-01

    Mass production of high-quality graphene at low cost is the cornerstone of its widespread practical applications. We present herein a self-limited growth approach for producing graphene powders by a small-methane-flow chemical vapour deposition process on naturally abundant and industrially widely used diatomite (biosilica) substrates. Distinct from chemically exfoliated graphene, the biomorphic graphene produced in this way is highly crystallized with atomic layer-thickness controllability, structural designability and fewer noncarbon impurities. In particular, the individual graphene microarchitectures preserve the three-dimensional, naturally curved surface morphology of the original diatom frustules, effectively overcoming interlayer stacking and hence giving excellent dispersion performance in fabricating solution-processible electrodes. The graphene films derived from as-made graphene powders, compatible with rod-coating, inkjet and roll-to-roll printing techniques, exhibit much higher electrical conductivity (~110,700 S m⁻¹ at 80% transmittance) than previously reported solution-based counterparts. This work thus puts forward a practical route for low-cost mass production of various powdery two-dimensional materials.

  19. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  20. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.

  1. Stable Graphene-Two-Dimensional Multiphase Perovskite Heterostructure Phototransistors with High Gain.

    Science.gov (United States)

    Shao, Yuchuan; Liu, Ye; Chen, Xiaolong; Chen, Chen; Sarpkaya, Ibrahim; Chen, Zhaolai; Fang, Yanjun; Kong, Jaemin; Watanabe, Kenji; Taniguchi, Takashi; Taylor, André; Huang, Jinsong; Xia, Fengnian

    2017-12-13

    Recently, two-dimensional (2D) organic-inorganic perovskites emerged as an alternative to their three-dimensional (3D) counterparts in photovoltaic applications, with improved moisture resistance. Here, we report a stable, high-gain phototransistor consisting of a monolayer graphene on hexagonal boron nitride (hBN) covered by a 2D multiphase perovskite heterostructure, which was realized using a newly developed two-step ligand exchange method. In this phototransistor, the multiple phases with varying bandgap in 2D perovskite thin films are aligned for efficient electron-hole pair separation, leading to a high responsivity of ∼10⁵ A W⁻¹ at 532 nm. Moreover, the designed phase alignment method aggregates more hydrophobic butylammonium cations close to the upper surface of the 2D perovskite thin film, preventing the permeation of moisture and enhancing the device stability dramatically. In addition, faster photoresponse and smaller 1/f noise observed in the 2D perovskite phototransistors indicate a smaller density of deep hole traps in the 2D perovskite thin film compared with their 3D counterparts. These desirable properties not only improve the performance of the phototransistor, but also provide a new direction for the future enhancement of the efficiency of 2D perovskite photovoltaics.

  2. Thermophysical and Mechanical Properties of Granite and Its Effects on Borehole Stability in High Temperature and Three-Dimensional Stress

    Directory of Open Access Journals (Sweden)

    Wang Yu

    2014-01-01

    When exploiting deep resources, the surrounding rock readily undergoes hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments was conducted to examine the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500-meter borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we derive the thermo-fluid-solid model and its governing equation. ANSYS-APDL was also used to simulate the influence of temperature on elastic modulus, Poisson's ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite's stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while Poisson's ratio and permeability of granite increase. The threshold pressure and temperature are 15 MPa and 200°C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with the increase of temperature. The thermo-fluid-solid coupling, which greatly impacts borehole stability, proves to be a good method for analyzing similar problems in other formations.

  3. Collective oscillations of twin boundaries in high temperature superconductors as an acoustic analogue of two-dimensional plasmons

    International Nuclear Information System (INIS)

    Kosevich, Yu.A.; Syrkin, E.S.

    1990-06-01

    Low frequency collective oscillations in a superlattice consisting of alternating highly anisotropic layers are considered. Such a superstructure may be formed in a ferroelastic near the structural phase transition by the alternation of twins. For the surface waves propagating along the layers, the conditions and the range of existence of those with the dispersion law ω ∼ k^(1/2), characteristic of two-dimensional plasmons, have been analyzed for a solid-state system with consideration for elastic anisotropy and retardation of acoustic waves. Such excitations ('dyadons') were used in an attempt to explain the anomalies of low temperature thermodynamic and kinetic characteristics of high-Tc superconductors. We have shown that the similarity of the densities of the matching phases and the retardation of elastic waves in the crystal narrow the range of existence of dyadons, but high elastic anisotropy of the solid phases enlarges the range of existence of such excitations in solid-state systems. An example of a possible crystalline geometry of phase matching for which collective excitations of this type arise is found. For transverse and longitudinal waves propagating across the layers, the existence is proved of low frequency acoustic branches separated by a wide gap from the nearest optical branches. (author). 18 refs

  4. Thermophysical and mechanical properties of granite and its effects on borehole stability in high temperature and three-dimensional stress.

    Science.gov (United States)

    Wang, Yu; Liu, Bao-lin; Zhu, Hai-yan; Yan, Chuan-liang; Li, Zhi-jun; Wang, Zhi-qiao

    2014-01-01

    When exploiting deep resources, the surrounding rock readily undergoes hole shrinkage, borehole collapse, and loss of circulation under high temperature and high pressure. A series of experiments was conducted to examine the compressional wave velocity, triaxial strength, and permeability of granite cored from a 3500-meter borehole under high temperature and three-dimensional stress. In light of the coupling of temperature, fluid, and stress, we derive the thermo-fluid-solid model and its governing equation. ANSYS-APDL was also used to simulate the influence of temperature on elastic modulus, Poisson's ratio, uniaxial compressive strength, and permeability. In light of the results, we establish a temperature-fluid-stress model to illustrate the granite's stability. The compressional wave velocity and elastic modulus decrease as the temperature rises, while Poisson's ratio and permeability of granite increase. The threshold pressure and temperature are 15 MPa and 200 °C, respectively. The temperature affects the fracture pressure more than the collapse pressure, but both parameters rise with the increase of temperature. The thermo-fluid-solid coupling, which greatly impacts borehole stability, proves to be a good method for analyzing similar problems in other formations.

  5. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
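As a concrete instance of a similarity coefficient that maps a pair of compatible representations onto the unit interval, the widely used Tanimoto (Jaccard) index on binary fingerprints can be written on the set-based representation discussed above. This is a minimal generic sketch, not code from the chapter:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two binary fingerprints,
    each given as the collection of 'on' bit positions: |A∩B| / |A∪B|.
    Values lie on the unit interval; 1 - tanimoto(a, b) is the
    corresponding dissimilarity (the Soergel distance for binary data).
    """
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)
```

For example, fingerprints with on-bits {1, 2, 3} and {2, 3, 4} share 2 of 4 distinct bits, giving a similarity of 0.5.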

  6. Blood pressure control is similar in treated hypertensive patients with optimal or with high-normal albuminuria.

    Science.gov (United States)

    Oliveras, Anna; Armario, Pedro; Lucas, Silvia; de la Sierra, Alejandro

    2014-09-01

    Although elevated urinary albumin excretion (UAE) is associated with cardiovascular prognosis and high blood pressure (BP), it is unknown whether differences in BP control could also exist between patients with different grades of UAE, even in the normal range. We sought to explore the association between different levels of UAE and BP control in treated hypertensive patients. A cohort of 1,200 treated hypertensive patients was evaluated. Clinical data, including 2 office BP measurements and UAE averaged from 2 samples, were recorded. Albuminuria was categorized into 4 groups: G0 (UAE <10 mg/g), G1 (UAE 10-29 mg/g), G2 (UAE 30-299 mg/g), and G3 (UAE ≥300 mg/g). Forty-three percent of patients had systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg. Median UAE was significantly higher (20.3 vs. 11.7 mg/g; P < 0.001) in these patients than in controlled hypertensive patients (BP <140/90 mm Hg). When UAE was categorized into the 4 groups, there were differences in BP control among groups (P < 0.001). The proportion of noncontrolled patients in G2 (52.3%) was significantly higher than in G0 (36.8%) and G1 (41.5%) (P < 0.01 and P < 0.05, respectively). Importantly, no significant differences were observed between G0 and G1 (P = 0.18) or between G2 and G3 (P = 0.48). With G0 as the reference group, the odds ratio of lack of BP control for the G2 group after adjustment for confounders was 1.40 (95% confidence interval = 1.16-1.68; P < 0.001). Lack of BP control is more prevalent among patients with microalbuminuria than in patients with normoalbuminuria. No significant difference was seen between patients with optimal or high-normal UAE.

  7. A Near-linear Time Approximation Algorithm for Angle-based Outlier Detection in High-dimensional Data

    DEFF Research Database (Denmark)

    Pham, Ninh Dang; Pagh, Rasmus

    2012-01-01

    Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality…
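For reference, the exact angle-based outlier factor that such estimators approximate can be computed naively in cubic time. The sketch below is the standard distance-weighted variance form of ABOF due to Kriegel et al., not the paper's near-linear random-projection estimator:

```python
import numpy as np
from itertools import combinations

def abof(p, others):
    """Naive angle-based outlier factor of point p (cubic overall:
    quadratic in the number of points, per query point).

    It is the variance, over all pairs (x, y) of other points, of the
    distance-weighted inner product <x-p, y-p> / (|x-p|^2 |y-p|^2).
    A point on the border of the data sees all other points within a
    narrow cone of angles, so outliers get a *small* ABOF score.
    """
    vals = []
    for x, y in combinations(others, 2):
        a, b = x - p, y - p
        vals.append((a @ b) / ((a @ a) * (b @ b)))
    return float(np.var(vals))
```

Ranking points by ascending ABOF then puts the strongest outlier candidates first, and the angle spectrum stays informative even where plain distances concentrate in high dimensions.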

  8. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most online video resources are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vector (MV) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction for the region in HEVC that is interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves a significant reduction in coding time with little rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.
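The area- and distance-based MV interpolation step might look like the following sketch. The abstract does not give the exact weighting formula, so the area-over-distance weights here are an assumption for illustration only:

```python
import numpy as np

def interpolate_pu_mv(mb_centers, mb_mvs, mb_areas, pu_center):
    """Hypothetical sketch: predict one motion vector for a merged HEVC
    PU from the H.264/AVC macroblock MVs it covers, weighting each MV
    by macroblock area and by inverse distance between the macroblock
    centre and the PU centre (the precise weights are an assumption).
    """
    mb_centers = np.asarray(mb_centers, float)
    mvs = np.asarray(mb_mvs, float)
    d = np.linalg.norm(mb_centers - np.asarray(pu_center, float), axis=1)
    w = np.asarray(mb_areas, float) / (d + 1e-9)  # area / distance weight
    w /= w.sum()                                  # normalize weights
    return w @ mvs                                # weighted-average MV
```

Seeding the HEVC motion search with such an averaged MV narrows the search window, which is where the transcoder's speedup over full motion estimation comes from.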

  9. Are high energy heavy ion collisions similar to a little bang, or just a very nice firework?

    Energy Technology Data Exchange (ETDEWEB)

    Shuryak, E.V. [State University of New York, NY (United States)

    2001-07-01

    The talk is a brief overview of recent progress in heavy ion physics, with emphasis on applications of macroscopic approaches. The central issues are whether the systems exhibit macroscopic behavior we need in order to interpret it as excited hadronic matter, and, if so, what is its effective Equation of State (EoS). This, in turn, depends on the collision rate in matter: we think we understand it in hadronic matter near freeze-out, but certainly not at earlier stages of the collisions. Still (and this is about the most important statement we make) there is no indication that it is not high enough, so that a hydro description of excited matter be possible. More specifically, we concentrate on such properties of the produced excited system as collective flow, particle composition and fluctuations. Note that both a generation of a pressure and the rate of fluctuation relaxation are ultimately a measure of a collision rate we would like to know. We also try to explain what exactly are the expected differences between collisions at AGS/SPS and RHIC energies. (author)

  10. Are high energy heavy ion collisions similar to a little bang, or just a very nice firework?

    International Nuclear Information System (INIS)

    Shuryak, E.V.

    2001-01-01

    The talk is a brief overview of recent progress in heavy ion physics, with emphasis on applications of macroscopic approaches. The central issues are whether the systems exhibit macroscopic behavior we need in order to interpret it as excited hadronic matter, and, if so, what is its effective Equation of State (EoS). This, in turn, depends on the collision rate in matter: we think we understand it in hadronic matter near freeze-out, but certainly not at earlier stages of the collisions. Still (and this is about the most important statement we make) there is no indication that it is not high enough, so that a hydro description of excited matter be possible. More specifically, we concentrate on such properties of the produced excited system as collective flow, particle composition and fluctuations. Note that both a generation of a pressure and the rate of fluctuation relaxation are ultimately a measure of a collision rate we would like to know. We also try to explain what exactly are the expected differences between collisions at AGS/SPS and RHIC energies. (author)

  11. Are High Energy Heavy Ion Collisions similar to a Little Bang, or just a very nice Firework?

    Science.gov (United States)

    Shuryak, E. V.

    2001-09-01

    The talk is a brief overview of recent progress in heavy ion physics, with emphasis on applications of macroscopic approaches. The central issues are whether the systems exhibit macroscopic behavior we need in order to interpret it as excited hadronic matter, and, if so, what is its effective Equation of State (EoS). This, in turn, depends on the collision rate in matter: we think we understand in hadronic matter near freeze-out, but certainly not at earlier stages of the collisions. Still (and this is about the most important statement we make) there is no indication that it is not high enough, so that a hydro description of excited matter be possible. More specifically, we concentrate on such properties of the produced excited system as collective flow, particle composition and fluctuations. Note that both a generation of a pressure and the rate of fluctuation relaxation are ultimately a measure of a collision rate we would like to know. We also try to explain what exactly are the expected differences between collisions at AGS/SPS and RHIC energies.

  12. High-speed two-dimensional laser scanner based on Bragg gratings stored in photothermorefractive glass.

    Science.gov (United States)

    Yaqoob, Zahid; Arain, Muzammil A; Riza, Nabeel A

    2003-09-10

    A high-speed free-space wavelength-multiplexed optical scanner with high-speed wavelength selection coupled with narrowband volume Bragg gratings stored in photothermorefractive (PTR) glass is reported. The proposed scanner with no moving parts has a modular design with a wide angular scan range, accurate beam pointing, low scanner insertion loss, and two-dimensional beam scan capabilities. We present a complete analysis and design procedure for storing multiple tilted Bragg-grating structures in a single PTR glass volume (for normal incidence) in an optimal fashion. Because the scanner design is modular, many PTR glass volumes (each having multiple tilted Bragg-grating structures) can be stacked together, providing an efficient throughput with operations in both the visible and the infrared (IR) regions. A proof-of-concept experimental study is conducted with four Bragg gratings in independent PTR glass plates, and both visible and IR region scanner operations are demonstrated.

  13. Penalized estimation for competing risks regression with applications to high-dimensional covariates

    DEFF Research Database (Denmark)

    Ambrogi, Federico; Scheike, Thomas H.

    2016-01-01

    …Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according…

  14. Graphene quantum dots-three-dimensional graphene composites for high-performance supercapacitors.

    Science.gov (United States)

    Chen, Qing; Hu, Yue; Hu, Chuangang; Cheng, Huhu; Zhang, Zhipan; Shao, Huibo; Qu, Liangti

    2014-09-28

    Graphene quantum dots (GQDs) have been successfully deposited onto the three-dimensional graphene (3DG) by a benign electrochemical method and the ordered 3DG structure remains intact after the uniform deposition of GQDs. In addition, the capacitive properties of the as-formed GQD-3DG composites are evaluated in symmetrical supercapacitors. It is found that the supercapacitor fabricated from the GQD-3DG composite is highly stable and exhibits a high specific capacitance of 268 F g⁻¹, representing a more than 90% improvement over that of the supercapacitor made from pure 3DG electrodes (136 F g⁻¹). Owing to the convenience of the current method, it can be further used in other well-defined electrode materials, such as carbon nanotubes, carbon aerogels and conjugated polymers to improve the performance of the supercapacitors.

  15. A High Sensitivity Three-Dimensional-Shape Sensing Patch Prepared by Lithography and Inkjet Printing

    Directory of Open Access Journals (Sweden)

    Cheng-Yao Lo

    2012-03-01

    A process combining conventional photolithography and a novel inkjet printing method for the manufacture of high sensitivity three-dimensional-shape (3DS) sensing patches was proposed and demonstrated. The supported curvature ranges from 1.41 × 10⁻² to 6.24 × 10⁻² mm⁻¹, and the sensing patch has a thickness of less than 130 μm and dimensions of 20 × 20 mm². A complete finite element method (FEM) model with simulation results was calculated and performed based on the buckling of columns and the deflection equation. The results show high compatibility of the drop-on-demand (DOD) inkjet printing with photolithography, and the interferometer design also supports bi-directional detection of deformation. The 3DS sensing patch can be operated remotely without any power consumption. It provides a novel and alternative option compared with other optical curvature sensors.

  16. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
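A binary-encoded genetic algorithm for the mask search described above could be sketched as follows. Tournament selection, one-point crossover, elitism and bit-flip mutation are generic GA choices made for illustration, not details taken from the paper:

```python
import numpy as np

def gene_mask_search(fitness, n_features, pop_size=20, n_gen=30, seed=0):
    """Minimal binary-encoded GA for 'gene masking'-style feature selection.

    Each chromosome is a 0/1 mask over the features; fitness(mask) is
    expected to score, e.g., a classifier trained on the unmasked
    features only. Returns the best mask found.
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        new = [pop[scores.argmax()].copy()]          # elitism: keep the best
        while len(new) < pop_size:
            # two binary-tournament selections give the two parents
            i, j = rng.choice(pop_size, 2, replace=False)
            p1 = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.choice(pop_size, 2, replace=False)
            p2 = pop[i] if scores[i] >= scores[j] else pop[j]
            cut = rng.integers(1, n_features)        # one-point crossover
            child = np.r_[p1[:cut], p2[cut:]]
            flip = rng.random(n_features) < 1.0 / n_features
            child[flip] ^= 1                         # bit-flip mutation
            new.append(child)
        pop = np.array(new)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]
```

Plugging in a fitness function that trains the classifier of interest on the masked data integrates the search into the training phase, as the paper describes; the surviving 1-bits then point at the features that contribute most to classification.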

  17. Three-dimensional analysis of harmonic generation in high-gain free-electron lasers

    International Nuclear Information System (INIS)

    Huang, Zhirong; Kim, Kwang-Je

    2000-01-01

    In a high-gain free-electron laser (FEL) employing a planar undulator, strong bunching at the fundamental wavelength can drive substantial bunching and power levels at the harmonic frequencies. In this paper we investigate the three-dimensional evolution of harmonic radiation based on the coupled Maxwell-Klimontovich equations that take into account nonlinear harmonic interactions. Each harmonic field is a sum of a linear amplification term and a term driven by nonlinear harmonic interactions. After a certain stage of exponential growth, the dominant nonlinear term is determined by interactions of the lower nonlinear harmonics and the fundamental radiation. As a result, the gain length, transverse profile, and temporal structure of the first few harmonics are eventually governed by those of the fundamental. Transversely coherent third-harmonic radiation power is found to approach 1% of the fundamental power level for current high-gain FEL projects

  18. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  19. High-resolution liquid patterns via three-dimensional droplet shape control.

    Science.gov (United States)

    Raj, Rishi; Adera, Solomon; Enright, Ryan; Wang, Evelyn N

    2014-09-25

    Understanding liquid dynamics on surfaces can provide insight into nature's design and enable fine manipulation capability in biological, manufacturing, microfluidic and thermal management applications. Of particular interest is the ability to control the shape of the droplet contact area on the surface, which is typically circular on a smooth homogeneous surface. Here, we show the ability to tailor various droplet contact area shapes ranging from squares, rectangles, hexagons, octagons, to dodecagons via the design of the structure or chemical heterogeneity on the surface. We simultaneously obtain the necessary physical insights to develop a universal model for the three-dimensional droplet shape by characterizing the droplet side and top profiles. Furthermore, arrays of droplets with controlled shapes and high spatial resolution can be achieved using this approach. This liquid-based patterning strategy promises low-cost fabrication of integrated circuits, conductive patterns and bio-microarrays for high-density information storage and miniaturized biochips and biosensors, among others.

  20. Three-dimensional Force and Kinematic Interactions in V1 Skating at High Speeds.

    Science.gov (United States)

    Stöggl, Thomas; Holmberg, Hans-Christer

    2015-06-01

    To describe the detailed kinetics and kinematics associated with use of the V1 skating technique at high skiing speeds and to identify factors that predict performance. Fifteen elite male cross-country skiers performed an incremental roller-skiing speed test (Vpeak) on a treadmill using the V1 skating technique. Pole and plantar forces and whole-body kinematics were monitored at four submaximal speeds. The propulsive force of the "strong side" pole was greater than that of the "weak side" (P …) during V1 skating at high speeds. The faster skiers exhibited more symmetric leg motion on the "strong" and "weak" sides, as well as more synchronized poling. With respect to methods, the pressure insoles and three-dimensional kinematics in combination with the leg push-off model described here can easily be applied to all skating techniques, aiding in the evaluation of skiing techniques and comparison of effectiveness.

  1. Newcastle Disease Viruses Causing Recent Outbreaks Worldwide Show Unexpectedly High Genetic Similarity to Historical Virulent Isolates from the 1940s

    Science.gov (United States)

    Dimitrov, Kiril M.; Lee, Dong-Hun; Williams-Coplin, Dawn; Olivier, Timothy L.; Miller, Patti J.

    2016-01-01

    Virulent strains of Newcastle disease virus (NDV) cause Newcastle disease (ND), a devastating disease of poultry and wild birds. Phylogenetic analyses clearly distinguish historical isolates (obtained prior to 1960) from currently circulating viruses of class II genotypes V, VI, VII, and XII through XVIII. Here, partial and complete genomic sequences of recent virulent isolates of genotypes II and IX from China, Egypt, and India were found to be nearly identical to those of historical viruses isolated in the 1940s. Phylogenetic analysis, nucleotide distances, and rates of change demonstrate that these recent isolates have not evolved significantly from the most closely related ancestors from the 1940s. The low rates of change for these virulent viruses (7.05 × 10−5 and 2.05 × 10−5 per year, respectively) and the minimal genetic distances existing between these and historical viruses (0.3 to 1.2%) of the same genotypes indicate an unnatural origin. As with any other RNA virus, Newcastle disease virus is expected to evolve naturally; thus, these findings suggest that some recent field isolates should be excluded from evolutionary studies. Furthermore, phylogenetic analyses show that these recent virulent isolates are more closely related to virulent strains isolated during the 1940s, which have been and continue to be used in laboratory and experimental challenge studies. Since the preservation of viable viruses in the environment for over 6 decades is highly unlikely, it is possible that the source of some of the recent virulent viruses isolated from poultry and wild birds might be laboratory viruses. PMID:26888902

  2. High dimensional and high resolution pulse sequences for backbone resonance assignment of intrinsically disordered proteins

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor, E-mail: kozmin@chem.uw.edu.pl [University of Warsaw, Faculty of Chemistry (Poland); Sanderova, Hana; Krasny, Libor [Institute of Microbiology, Academy of Sciences of the Czech Republic, Laboratory of Molecular Genetics of Bacteria, Department of Bacteriology (Czech Republic)

    2012-04-15

Four novel 5D (HACA(N)CONH, HNCOCACB, (HACA)CON(CA)CONH, (H)NCO(NCA)CONH), and one 6D ((H)NCO(N)CACONH) NMR pulse sequences are proposed. The new experiments employ non-uniform sampling that enables achieving high resolution in indirectly detected dimensions. The experiments facilitate resonance assignment of intrinsically disordered proteins. The novel pulse sequences were successfully tested using the δ subunit (20 kDa) of Bacillus subtilis RNA polymerase that has an 81-amino acid disordered part containing various repetitive sequences.

  3. Reliable Exfoliation of Large-Area High-Quality Flakes of Graphene and Other Two-Dimensional Materials.

    Science.gov (United States)

    Huang, Yuan; Sutter, Eli; Shi, Norman N; Zheng, Jiabao; Yang, Tianzhong; Englund, Dirk; Gao, Hong-Jun; Sutter, Peter

    2015-11-24

Mechanical exfoliation has been a key enabler of the exploration of the properties of two-dimensional materials, such as graphene, by providing routine access to high-quality material. The original exfoliation method, which remained largely unchanged during the past decade, provides relatively small flakes with moderate yield. Here, we report a modified approach for exfoliating thin monolayer and few-layer flakes from layered crystals. Our method introduces two process steps that enhance and homogenize the adhesion force between the outermost sheet in contact with a substrate: Prior to exfoliation, ambient adsorbates are effectively removed from the substrate by oxygen plasma cleaning, and an additional heat treatment maximizes the uniform contact area at the interface between the source crystal and the substrate. For graphene exfoliation, these simple process steps increased the yield and the area of the transferred flakes by more than 50 times compared to the established exfoliation methods. Raman and AFM characterization shows that the graphene flakes are of similar high quality as those obtained in previous reports. Graphene field-effect devices were fabricated and measured with back-gating and solution top-gating, yielding mobilities of ∼4000 and 12,000 cm²/(V s), respectively, and thus demonstrating excellent electrical properties. Experiments with other layered crystals, e.g., a bismuth strontium calcium copper oxide (BSCCO) superconductor, show enhancements in exfoliation yield and flake area similar to those for graphene, suggesting that our modified exfoliation method provides an effective way for producing large area, high-quality flakes of a wide range of 2D materials.

  4. High dimensional biological data retrieval optimization with NoSQL technology

    Science.gov (United States)

    2014-01-01

Background: High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results: In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. Conclusions: The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper.
We aim to use this new data

  5. High dimensional biological data retrieval optimization with NoSQL technology.

    Science.gov (United States)

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. 
We aim to use this new data model as a basis for migrating
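The key-value design described above can be sketched with an ordinary Python dictionary standing in for an HBase table; the composite row key and the trial/patient/probeset identifiers below are illustrative assumptions, not the schema used in the paper.

```python
# Sketch of a key-value layout for gene-expression retrieval, in the
# spirit of the HBase design described above. The composite row key and
# the identifiers are illustrative assumptions, not the paper's schema.

store = {}  # stand-in for an HBase table

def make_row_key(trial_id, patient_id, probeset_id):
    # Composite key: one entry per (trial, patient, probeset) triple, so a
    # prefix scan over trial|patient retrieves a whole expression profile.
    return f"{trial_id}|{patient_id}|{probeset_id}"

def put_expression(trial, patient, probeset, value):
    store[make_row_key(trial, patient, probeset)] = value

def get_patient_profile(trial, patient):
    prefix = f"{trial}|{patient}|"
    return {k.split("|")[2]: v for k, v in store.items() if k.startswith(prefix)}

put_expression("GSE2658", "P001", "201_at", 7.2)
put_expression("GSE2658", "P001", "202_at", 5.9)
put_expression("GSE2658", "P002", "201_at", 6.1)
print(get_patient_profile("GSE2658", "P001"))  # {'201_at': 7.2, '202_at': 5.9}
```

In HBase terms, the prefix lookup corresponds to a row-key range scan, which is what makes per-patient retrieval fast compared to a relational join over many rows.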

  6. Nano-engineering of three-dimensional core/shell nanotube arrays for high performance supercapacitors

    Science.gov (United States)

    Grote, Fabian; Wen, Liaoyong; Lei, Yong

    2014-06-01

Large-scale arrays of core/shell nanostructures are highly desirable to enhance the performance of supercapacitors. Here we demonstrate an innovative template-based fabrication technique with high structural controllability, which is capable of synthesizing well-ordered three-dimensional arrays of SnO2/MnO2 core/shell nanotubes for electrochemical energy storage in supercapacitor applications. The SnO2 core is fabricated by atomic layer deposition and provides a highly electrically conductive matrix. Subsequently a thin MnO2 shell is coated by electrochemical deposition onto the SnO2 core, which guarantees a short ion diffusion length within the shell. The core/shell structure shows excellent electrochemical performance, with a high specific capacitance of 910 F g-1 at 1 A g-1 and a good rate capability, retaining 217 F g-1 at 50 A g-1. These results should pave the way to realizing aqueous-based asymmetric supercapacitors with high specific power and high specific energy.

  7. Predicting Future High-Cost Schizophrenia Patients Using High-Dimensional Administrative Data

    Directory of Open Access Journals (Sweden)

    Yajuan Wang

    2017-06-01

Full Text Available Background: The burden of serious and persistent mental illness such as schizophrenia is substantial and requires health-care organizations to have adequate risk adjustment models to effectively allocate their resources to managing patients who are at the greatest risk. Currently available models underestimate health-care costs for those with mental or behavioral health conditions. Objectives: The study aimed to develop and evaluate predictive models for identification of future high-cost schizophrenia patients using advanced supervised machine learning methods. Methods: This was a retrospective study using a payer administrative database. The study cohort consisted of 97,862 patients diagnosed with schizophrenia (ICD9 code 295.*) from January 2009 to June 2014. Training (n = 34,510) and study evaluation (n = 30,077) cohorts were derived based on 12-month observation and prediction windows (PWs). The target was average total cost/patient/month in the PW. Three models (baseline, intermediate, final) were developed to assess the value of different variable categories for cost prediction (demographics, coverage, cost, health-care utilization, antipsychotic medication usage, and clinical conditions). Scalable orthogonal regression, the significant attribute selection in high dimensions method, and random forests regression were used to develop the models. The trained models were assessed in the evaluation cohort using the regression R2, patient classification accuracy (PCA), and cost accuracy (CA). Model performance was compared to the Centers for Medicare & Medicaid Services Hierarchical Condition Categories (CMS-HCC) model. Results: At the top 10% cost cutoff, the final model achieved 0.23 R2, 43% PCA, and 63% CA; in contrast, the CMS-HCC model achieved 0.09 R2 and 27% PCA with 45% CA. The final model and the CMS-HCC model identified 33 and 22%, respectively, of total cost at the top 10% cost cutoff. Conclusion: Using advanced feature selection leveraging detailed

  8. High-Level Heteroatom Doped Two-Dimensional Carbon Architectures for Highly Efficient Lithium-Ion Storage

    Directory of Open Access Journals (Sweden)

    Zhijie Wang

    2018-04-01

Full Text Available In this work, high-level heteroatom-doped two-dimensional hierarchical carbon architectures (H-2D-HCA) are developed for highly efficient Li-ion storage applications. The H-2D-HCA possesses a hierarchical 2D morphology consisting of tiny carbon nanosheets vertically grown on carbon nanoplates, with a hierarchical porosity of multiscale pore size. More importantly, the H-2D-HCA shows abundant heteroatom functionality, with sulfur (S) doping of 0.9% and nitrogen (N) doping of as high as 15.5%, in which the electrochemically active N accounts for 84% of total N heteroatoms. In addition, the H-2D-HCA has an expanded interlayer distance of 0.368 nm. When used as a lithium-ion battery anode, it shows excellent Li-ion storage performance: even at a high current density of 5 A g−1, it still delivers a high discharge capacity of 329 mA h g−1 after 1,000 cycles. First-principles calculations verify that such unique microstructural characteristics and the high-level heteroatom doping enhance the Li adsorption stability, electronic conductivity, and Li diffusion mobility of carbon nanomaterials. Therefore, the H-2D-HCA is a promising candidate for next-generation LIB anodes.

  9. COSMO-PAFOG: Three-dimensional fog forecasting with the high-resolution COSMO-model

    Science.gov (United States)

    Hacker, Maike; Bott, Andreas

    2017-04-01

    The presence of fog can have critical impact on shipping, aviation and road traffic increasing the risk of serious accidents. Besides these negative impacts of fog, in arid regions fog is explored as a supplementary source of water for human settlements. Thus the improvement of fog forecasts holds immense operational value. The aim of this study is the development of an efficient three-dimensional numerical fog forecast model based on a mesoscale weather prediction model for the application in the Namib region. The microphysical parametrization of the one-dimensional fog forecast model PAFOG (PArameterized FOG) is implemented in the three-dimensional nonhydrostatic mesoscale weather prediction model COSMO (COnsortium for Small-scale MOdeling) developed and maintained by the German Meteorological Service. Cloud water droplets are introduced in COSMO as prognostic variables, thus allowing a detailed description of droplet sedimentation. Furthermore, a visibility parametrization depending on the liquid water content and the droplet number concentration is implemented. The resulting fog forecast model COSMO-PAFOG is run with kilometer-scale horizontal resolution. In vertical direction, we use logarithmically equidistant layers with 45 of 80 layers in total located below 2000 m. Model results are compared to satellite observations and synoptic observations of the German Meteorological Service for a domain in the west of Germany, before the model is adapted to the geographical and climatological conditions in the Namib desert. COSMO-PAFOG is able to represent the horizontal structure of fog patches reasonably well. Especially small fog patches typical of radiation fog can be simulated in agreement with observations. Ground observations of temperature are also reproduced. Simulations without the PAFOG microphysics yield unrealistically high liquid water contents. This in turn reduces the radiative cooling of the ground, thus inhibiting nocturnal temperature decrease. 
The

  10. High-dimensional gene expression profiling studies in high and low responders to primary smallpox vaccination.

    Science.gov (United States)

    Haralambieva, Iana H; Oberg, Ann L; Dhiman, Neelam; Ovsyannikova, Inna G; Kennedy, Richard B; Grill, Diane E; Jacobson, Robert M; Poland, Gregory A

    2012-11-15

The mechanisms underlying smallpox vaccine-induced variations in immune responses are not well understood, but are of considerable interest to a deeper understanding of poxvirus immunity and correlates of protection. We assessed transcriptional messenger RNA expression changes in 197 recipients of primary smallpox vaccination representing the extremes of humoral and cellular immune responses. The 20 most significant differentially expressed genes include a tumor necrosis factor-receptor superfamily member, an interferon (IFN) gene, a chemokine gene, zinc finger protein genes, nuclear factors, and histones (P ≤ 1.06 × 10−20, q ≤ 2.64 × 10−17). A pathway analysis identified 4 enriched pathways, with cytokine production by the T-helper 17 subset of CD4+ T cells being the most significant pathway (P = 3.42 × 10−5). Two pathways (antiviral actions of IFNs, P = 8.95 × 10−5; and IFN-α/β signaling pathway, P = 2.92 × 10−4), integral to innate immunity, were enriched when comparing high with low antibody responders (false discovery rate < 0.05). Genes related to immune function and transcription (TLR8, P = .0002; DAPP1, P = .0003; LAMP3, P = 9.96 × 10−5; NR4A2, P ≤ .0002; EGR3, P = 4.52 × 10−5), and other genes with a possible impact on immunity (LNPEP, P = 3.72 × 10−5; CAPRIN1, P = .0001; XRN1, P = .0001), were found to be expressed differentially in high versus low antibody responders. We identified novel and known immunity-related genes and pathways that may account for differences in immune response to smallpox vaccination.

  11. Protein structural similarity search by Ramachandran codes

    Directory of Open Access Journals (Sweden)

    Chang Chih-Hung

    2007-08-01

Full Text Available Abstract Background: Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structural similarity searching. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match that of sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results: We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to that of Combinatorial Extension (CE), and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program that runs on many different platforms. Conclusion: As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools, applicable to automated and high-throughput functional annotations or predictions for the ever increasing number of published protein structures in this post-genomic era.
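The linear-encoding idea can be illustrated with a minimal sketch, assuming a coarse 3x3 angular grid in place of SARST's nearest-neighbor clustering of the Ramachandran map; the grid, letter assignment, and example angles below are hypothetical.

```python
import string

# Toy version of the linear-encoding step described above: map each
# residue's (phi, psi) backbone dihedral angles to a letter by region of
# the Ramachandran map, turning a 3D structure into a text string that
# ordinary sequence-alignment tools can search. The coarse 3x3 grid is
# an illustrative stand-in for SARST's nearest-neighbor clustering.

def encode(dihedrals):
    letters = string.ascii_uppercase
    out = []
    for phi, psi in dihedrals:
        i = min(int((phi + 180.0) // 120.0), 2)  # phi bin: 0..2
        j = min(int((psi + 180.0) // 120.0), 2)  # psi bin: 0..2
        out.append(letters[3 * i + j])
    return "".join(out)

helix = [(-60.0, -45.0)] * 4   # alpha-helical region angles
sheet = [(-120.0, 130.0)] * 4  # beta-sheet region angles
print(encode(helix), encode(sheet))  # prints: EEEE CCCC
```

Once structures are strings, off-the-shelf alignment tools (with a suitable substitution matrix) can score structural similarity at sequence-search speed, which is the core of the approach.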

  12. Analysis of HIV-1 intersubtype recombination breakpoints suggests region with high pairing probability may be a more fundamental factor than sequence similarity affecting HIV-1 recombination.

    Science.gov (United States)

    Jia, Lei; Li, Lin; Gui, Tao; Liu, Siyang; Li, Hanping; Han, Jingwan; Guo, Wei; Liu, Yongjian; Li, Jingyun

    2016-09-21

With increasing data on HIV-1, a more relevant molecular model describing the mechanistic details of HIV-1 genetic recombination usually requires upgrades. Currently an incomplete structural understanding of the copy choice mechanism, along with several other issues in the field that lack elucidation, led us to perform an analysis of the correlation between breakpoint distributions and (1) the probability of base pairing, and (2) intersubtype genetic similarity to further explore structural mechanisms. Near full-length sequences of URFs from Asia, Europe, and Africa (one sequence/patient), and representative sequences of worldwide CRFs, were retrieved from the Los Alamos HIV database. Their recombination patterns were analyzed by jpHMM in detail. Then the relationships between breakpoint distributions and (1) the probability of base pairing, and (2) intersubtype genetic similarities were investigated. The Pearson correlation test showed that all URF groups and the CRF group exhibit the same breakpoint distribution pattern. Additionally, the Wilcoxon two-sample test indicated a significant and inexplicable limitation of recombination in regions with high pairing probability. These regions have been found to be strongly conserved across distinct biological states (i.e., strong intersubtype similarity), and genetic similarity has been determined to be a very important factor promoting recombination. Thus, the results revealed an unexpected disagreement between intersubtype similarity and breakpoint distribution, which was further confirmed by genetic similarity analysis. Our analysis reveals a critical conflict between results from natural HIV-1 isolates and those from HIV-1-based assay vectors in which genetic similarity has been shown to be a very critical factor promoting recombination. These results indicate that regions with high pairing probability may be a more fundamental factor affecting HIV-1 recombination than sequence similarity in natural HIV-1 infections. Our

  13. High-resolution non-destructive three-dimensional imaging of integrated circuits

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-01

    Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  14. Construction of high-dimensional neural network potentials using environment-dependent atom pairs.

    Science.gov (United States)

    Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg

    2012-05-21

    An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
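The atom-based construction compared above can be sketched in a few lines: the total energy is a sum of per-atom network outputs evaluated on environment descriptors, which makes the potential invariant to atom permutation. The single radial symmetry function and the tiny fixed-weight network below are illustrative assumptions, not a fitted potential.

```python
import numpy as np

# Minimal sketch of a high-dimensional NN potential in the atom-based
# flavor described above: total energy = sum of per-atom network outputs
# computed from environment descriptors. The single Gaussian descriptor
# and random, untrained weights are illustrative assumptions only.

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((1, 4)), np.zeros(4)   # 1 descriptor -> 4 hidden
W2, b2 = rng.standard_normal(4), 0.0                # 4 hidden -> 1 energy

def descriptor(i, positions, eta=0.5):
    # One radial symmetry function: sum of Gaussians over all neighbors.
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[d > 1e-9]  # drop the atom's zero distance to itself
    return np.array([np.sum(np.exp(-eta * d ** 2))])

def atomic_energy(g):
    h = np.tanh(g @ W1 + b1)
    return float(h @ W2 + b2)

def total_energy(positions):
    return sum(atomic_energy(descriptor(i, positions)) for i in range(len(positions)))

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(total_energy(pos))
```

Because each atomic energy depends only on that atom's local environment, the same trained network transfers to systems of different size, which is what makes the scheme "high-dimensional"; a pair-based variant would instead feed descriptors of atom pairs into the network.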

  15. High-resolution non-destructive three-dimensional imaging of integrated circuits.

    Science.gov (United States)

    Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H R; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel

    2017-03-15

Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography, a high-resolution coherent diffractive imaging technique, can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.

  16. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
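The two-stage procedure described above (PCA factors within each cluster, RV coefficient between clusters) can be sketched on synthetic data; the toy "regional clusters" below are assumptions, not fMRI signals.

```python
import numpy as np

# Toy version of the MSFA pipeline above: extract per-cluster factors
# with PCA, then measure between-cluster dependence with the RV
# coefficient. The synthetic clusters are illustrative assumptions.
rng = np.random.default_rng(0)
T = 300
shared = rng.standard_normal(T)  # latent factor shared by clusters 1 and 2
X1 = np.outer(shared, rng.standard_normal(8)) + 0.2 * rng.standard_normal((T, 8))
X2 = np.outer(shared, rng.standard_normal(8)) + 0.2 * rng.standard_normal((T, 8))
X3 = rng.standard_normal((T, 8))  # independent cluster

def pca_factors(X, k=1):
    # Rank-k PCA factors: optimal reconstruction under squared loss.
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * S[:k]

def rv_coefficient(A, B):
    # RV coefficient: a matrix-level analogue of squared correlation.
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    num = np.trace((A.T @ B) @ (B.T @ A))
    den = np.sqrt(np.trace((A.T @ A) @ (A.T @ A)) * np.trace((B.T @ B) @ (B.T @ B)))
    return num / den

F1, F2, F3 = (pca_factors(X) for X in (X1, X2, X3))
print(rv_coefficient(F1, F2) > rv_coefficient(F1, F3))  # True
```

Working with the small factor matrices rather than the full node-by-node correlation matrix is what makes the global-connectivity step computationally cheap.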

  17. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
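A minimal sketch of the FANS recipe follows, with Gaussian marginal density estimates and plain gradient descent standing in for the paper's nonparametric estimators and penalized solver; the variance-separated toy data is an assumption chosen so that the marginal density ratios carry the class information.

```python
import numpy as np

# Sketch of the FANS idea above: transform each feature into an estimated
# marginal log density ratio, then fit an L2-penalized logistic regression
# on the transformed features. Gaussian density estimates and gradient
# descent are stand-ins for the paper's nonparametric/penalized machinery.
rng = np.random.default_rng(1)
n = 400
X = np.vstack([rng.standard_normal((n, 2)),         # class 0: tight spread
               3.0 * rng.standard_normal((n, 2))])  # class 1: wide spread
y = np.r_[np.zeros(n), np.ones(n)]

def log_density_ratio(Xtr, ytr, Xnew):
    out = np.empty_like(Xnew)
    for j in range(Xnew.shape[1]):
        m0, s0 = Xtr[ytr == 0, j].mean(), Xtr[ytr == 0, j].std()
        m1, s1 = Xtr[ytr == 1, j].mean(), Xtr[ytr == 1, j].std()
        lf0 = -np.log(s0) - (Xnew[:, j] - m0) ** 2 / (2 * s0 ** 2)
        lf1 = -np.log(s1) - (Xnew[:, j] - m1) ** 2 / (2 * s1 ** 2)
        out[:, j] = lf1 - lf0  # the most powerful univariate classifier
    return out

def fit_penalized_logistic(Z, y, l2=1e-2, lr=0.1, steps=2000):
    A = np.c_[np.ones(len(Z)), Z]
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ w, -30, 30)))
        w -= lr * (A.T @ (p - y) / len(y) + l2 * np.r_[0.0, w[1:]])
    return w

Z = log_density_ratio(X, y, X)
w = fit_penalized_logistic(Z, y)
acc = np.mean((np.c_[np.ones(len(Z)), Z] @ w > 0) == (y == 1))
print(f"training accuracy on augmented features: {acc:.2f}")
```

On this data a linear classifier on the raw features is near chance, while the augmented features make the (quadratic) class boundary linear in the transformed space, which is the point of the augmentation.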

  18. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision and hobbyist unmanned aerial vehicles

    Science.gov (United States)

    Dandois, J. P.; Ellis, E. C.

    2013-12-01

High spatial resolution three-dimensional (3D) measurements of vegetation by remote sensing are advancing ecological research and environmental management. However, substantial economic and logistical costs limit this application, especially for observing phenological dynamics in ecosystem structure and spectral traits. Here we demonstrate a new aerial remote sensing system enabling routine and inexpensive aerial 3D measurements of canopy structure and spectral attributes, with properties similar to those of LIDAR, but with RGB (red-green-blue) spectral attributes for each point, enabling high frequency observations within a single growing season. This 'Ecosynth' methodology applies photogrammetric "Structure from Motion" computer vision algorithms to large sets of highly overlapping low altitude (USA. Ecosynth canopy height maps (CHMs) were strong predictors of field-measured tree heights (R2 0.63 to 0.84) and were highly correlated with a LIDAR CHM (R 0.87) acquired 4 days earlier, though Ecosynth-based estimates of aboveground biomass densities included significant errors (31-36% of field-based estimates). Repeated scanning of a 0.25 ha forested area at six different times across a 16 month period revealed ecologically significant dynamics in canopy color at different heights and a structural shift upward in canopy density, as demonstrated by changes in vertical height profiles of point density and relative RGB brightness. Changes in canopy relative greenness were highly correlated (R2 = 0.88) with MODIS NDVI time series for the same area, and vertical differences in canopy color revealed the early green up of the dominant canopy species, Liriodendron tulipifera, strong evidence that Ecosynth time series measurements capture vegetation structural and spectral dynamics at the spatial scale of individual trees. Observing canopy phenology in 3D at high temporal resolutions represents a breakthrough in forest ecology. Inexpensive user-deployed technologies for

  19. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available The identification of graph similarity has been considered a highly relevant research topic in the Semantic Web, artificial intelligence, shape recognition, and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches to this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not counted toward the similarity of two graphs in cases such as two graphs without common concepts, whether the measure is based on the union of the two graphs, on the notion of the maximum common subgraph (MCS), or on the graph edit distance. This leads to inadequate results in the context of information retrieval. To overcome this problem, we propose a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure, and we apply it to examples. The results show that our measure runs faster than existing approaches. In addition, a comparison of the relevance of the similarity values obtained shows that this new graph measure is advantageous and contributes to solving the problem mentioned above.
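
The underlying Wu and Palmer measure scores two concepts by the depth of their deepest shared ancestor in a taxonomy. A minimal sketch on a toy taxonomy follows; the tree, node names, and helper functions are illustrative assumptions, not the authors' graph-level measure.

```python
# Hedged sketch: Wu-Palmer similarity on a toy concept taxonomy.
TAXONOMY = {            # child -> parent; "entity" is the root
    "entity": None,
    "shape": "entity",
    "polygon": "shape",
    "triangle": "polygon",
    "square": "polygon",
    "circle": "shape",
}

def ancestors(node):
    """Path from node up to the root, node first."""
    path = []
    while node is not None:
        path.append(node)
        node = TAXONOMY[node]
    return path

def depth(node):
    """Depth of a node; the root has depth 1."""
    return len(ancestors(node))

def wu_palmer(c1, c2):
    """2 * depth(LCS) / (depth(c1) + depth(c2))."""
    a1 = ancestors(c1)
    a2 = set(ancestors(c2))
    lcs = next(n for n in a1 if n in a2)   # deepest shared ancestor
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("triangle", "square"))  # siblings sharing "polygon"
print(wu_palmer("triangle", "circle"))  # only "shape" is shared
```

Siblings under a deep common ancestor score higher (0.75 here) than concepts whose only shared ancestor sits near the root (4/7 here), which is the intuition the abstract builds on.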

  20. High similarity of Trypanosoma cruzi kDNA genetic profiles detected by LSSP-PCR within family groups in an endemic area of Chagas disease in Brazil

    Directory of Open Access Journals (Sweden)

    Sandra Maria Alkmim-Oliveira

    2014-10-01

    Full Text Available Introduction Determining the genetic similarities among Trypanosoma cruzi populations isolated from different hosts and vectors is very important to clarify the epidemiology of Chagas disease. Methods An epidemiological study was conducted in a Brazilian endemic area for Chagas disease, including 76 chronic chagasic individuals (96.1% with an indeterminate form; 46.1% with positive hemoculture. Results T. cruzi I (TcI was isolated from one child and TcII was found in the remaining (97.1% subjects. Low-stringency single-specific-primer-polymerase chain reaction (LSSP-PCR showed high heterogeneity among TcII populations (46% of shared bands; however, high similarities (80-100% among pairs of mothers/children, siblings, or cousins were detected. Conclusions LSSP-PCR showed potential for identifying similar parasite populations among individuals with close kinship in epidemiological studies of Chagas disease.

  1. Exploring high dimensional data with Butterfly: a novel classification algorithm based on discrete dynamical systems.

    Science.gov (United States)

    Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken

    2014-03-01

    We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterated function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate feature selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models is returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
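
The chaos-game mechanism the abstract describes can be illustrated with a minimal iterated map: thresholded features select corners of the unit square and a walker moves halfway toward each selected corner, so similar feature vectors trace similar 2D paths. The corner assignment, median thresholding, and the 1/2 contraction ratio below are illustrative assumptions, not Butterfly's actual rules.

```python
# Hedged sketch of a chaos-game embedding: each subject's feature
# vector drives a 2D iterated map, and the visited points form that
# subject's "signature" in the plane.
CORNERS = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]

def chaos_game_trace(features, medians):
    """Map a feature vector to a 2D point trajectory.

    Consecutive pairs of median-thresholded features select one of
    four corners; the walker moves halfway toward that corner.
    """
    bits = [1 if f > m else 0 for f, m in zip(features, medians)]
    x, y = 0.5, 0.5                       # start at the center
    trace = []
    for i in range(0, len(bits) - 1, 2):
        cx, cy = CORNERS[2 * bits[i] + bits[i + 1]]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        trace.append((x, y))
    return trace

trace = chaos_game_trace([3.1, 0.2, 5.6, 9.9], [2.0, 1.0, 6.0, 4.0])
```

Because each step halves the distance to a corner, the final point encodes the whole thresholded feature sequence, which is the "memory-type mechanism" the abstract refers to.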

  2. Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement.

    Science.gov (United States)

    Lin, Hui; Gao, Jian; Mei, Qing; He, Yunbo; Liu, Junxiu; Wang, Xingjin

    2016-04-04

    It is a challenge for any optical method to measure objects with a large range of reflectivity variation across the surface. Image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. This paper presents a new adaptive digital fringe projection technique which avoids image saturation and has a high signal-to-noise ratio (SNR) in the three-dimensional (3-D) shape measurement of objects that have a large range of reflectivity variation across the surface. Compared to previous high dynamic range 3-D scan methods using many exposures and fringe pattern projections, which consume a lot of time, the proposed technique uses only two preliminary steps of fringe pattern projection and image capture to generate the adapted fringe patterns, by adaptively adjusting the pixel-wise intensity of the projected fringe patterns based on the saturated pixels in the captured images of the surface being measured. For bright regions due to high surface reflectivity and high illumination from ambient light and surface interreflections, the projected intensity is reduced just enough to avoid image saturation. Simultaneously, the maximum intensity of 255 is used for those dark regions with low surface reflectivity to maintain a high SNR. Our experiments demonstrate that the proposed technique can achieve higher 3-D measurement accuracy across a surface with a large range of reflectivity variation.
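
The pixel-wise adaptation idea can be sketched as follows: wherever the preliminary capture is at or near saturation, scale the projected intensity down toward a target just below saturation; elsewhere keep the maximum intensity. The linear camera-response assumption and the 250-count target are ours, not the paper's calibration.

```python
# Hedged sketch of pixel-wise projector intensity adaptation.
# Assumes an 8-bit camera with a roughly linear response, so halving
# the projected intensity roughly halves the captured count.
def adapt_projection(captured, projected, target=250.0):
    """captured, projected: flat lists of 8-bit pixel values.

    Returns the adapted projected intensities: bright (near-saturated)
    pixels are scaled toward `target`; dark pixels keep full intensity.
    """
    adapted = []
    for cap, proj in zip(captured, projected):
        scale = target / cap if cap > target else 1.0
        adapted.append(min(255.0, proj * scale))
    return adapted

# A saturated pixel (255) gets its projection dimmed; a dark pixel
# (100) keeps the full projected intensity for SNR.
adapted = adapt_projection([255, 100], [255, 255])
```

In the real method the preliminary captures determine the scaling per pixel; a clipped 255 count only bounds the true brightness, which is why the paper uses two preliminary projection/capture steps rather than one.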

  3. High thermoelectric power factor in two-dimensional crystals of MoS2

    Science.gov (United States)

    Hippalgaonkar, Kedar; Wang, Ying; Ye, Yu; Qiu, Diana Y.; Zhu, Hanyu; Wang, Yuan; Moore, Joel; Louie, Steven G.; Zhang, Xiang

    2017-03-01

    The quest for high-efficiency heat-to-electricity conversion has been one of the major driving forces toward renewable energy production for the future. Efficient thermoelectric devices require high voltage generation from a temperature gradient and a large electrical conductivity while maintaining a low thermal conductivity. For a given thermal conductivity and temperature, the thermoelectric power factor is determined by the electronic structure of the material. Low dimensionality (1D and 2D) opens new routes to a high power factor due to the unique density of states (DOS) of confined electrons and holes. The 2D transition metal dichalcogenide (TMDC) semiconductors represent a new class of thermoelectric materials not only due to such confinement effects but especially due to their large effective masses and valley degeneracies. Here, we report a power factor of MoS2 as large as 8.5 mW m⁻¹ K⁻² at room temperature, which is among the highest measured in traditional, gapped thermoelectric materials. To obtain these high power factors, we perform thermoelectric measurements on few-layer MoS2 in the metallic regime, which allows us to access the 2D DOS near the conduction band edge and exploit the effect of 2D confinement on electron scattering rates, resulting in a large Seebeck coefficient. The demonstrated high, electronically modulated power factor in 2D TMDCs holds promise for efficient thermoelectric energy conversion.

  4. Direct numerical simulation of a compressible boundary-layer flow past an isolated three-dimensional hump in a high-speed subsonic regime

    Science.gov (United States)

    De Grazia, D.; Moxey, D.; Sherwin, S. J.; Kravtsova, M. A.; Ruban, A. I.

    2018-02-01

    In this paper we study the boundary-layer separation produced in a high-speed subsonic boundary layer by a small wall roughness. Specifically, we present a direct numerical simulation (DNS) of a two-dimensional boundary-layer flow over a flat plate encountering a three-dimensional Gaussian-shaped hump. This work was motivated by the lack of DNS data for boundary-layer flows past roughness elements in a similar regime, which is typical of civil aviation. The Mach and Reynolds numbers are chosen to be relevant for aeronautical applications when considering small imperfections at the leading edge of wings. We analyze different heights of the hump: the smaller heights result in a weakly nonlinear regime, while the larger ones result in a fully nonlinear regime, with a growing laminar separation bubble arising downstream of the roughness element and the formation of a pair of streamwise counter-rotating vortices which appear to sustain themselves.

  5. High electro-catalytic activities of glucose oxidase embedded one-dimensional ZnO nanostructures

    International Nuclear Information System (INIS)

    Sarkar, Nirmal K; Bhattacharyya, Swapan K

    2013-01-01

    One-dimensional ZnO nanorods and nanowires are separately synthesized on a Zn substrate by simple hydrothermal processes at low temperatures. Electro-catalytic responses of glucose oxidase/ZnO/Zn electrodes using these two synthesized nanostructures of ZnO are reported and compared with others available in the literature. It is apparent that the Michaelis–Menten constant, K_M^app, for the present ZnO nanowire, which has a greater aspect ratio, is the lowest among those compared. This sensor shows a lower oxidation peak potential with a wide detection range of 6.6 μM–380 mM and the highest sensitivity, ∼35.1 μA cm⁻² mM⁻¹, among the values reported in the literature. Enzyme catalytic efficiency and turnover numbers are also found to be remarkably high. (paper)

  6. Resolving molecular vibronic structure using high-sensitivity two-dimensional electronic spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Bizimana, Laurie A.; Brazard, Johanna; Carbery, William P.; Gellen, Tobias; Turner, Daniel B., E-mail: dturner@nyu.edu [Department of Chemistry, New York University, 100 Washington Square East, New York, New York 10003 (United States)

    2015-10-28

    Coherent multidimensional optical spectroscopy is an emerging technique for resolving structure and ultrafast dynamics of molecules, proteins, semiconductors, and other materials. A current challenge is the quality of kinetics that are examined as a function of waiting time. Inspired by noise-suppression methods of transient absorption, here we incorporate shot-by-shot acquisitions and balanced detection into coherent multidimensional optical spectroscopy. We demonstrate that implementing noise-suppression methods in two-dimensional electronic spectroscopy not only improves the quality of features in individual spectra but also increases the sensitivity to ultrafast time-dependent changes in the spectral features. Measurements on cresyl violet perchlorate are consistent with the vibronic pattern predicted by theoretical models of a highly displaced harmonic oscillator. The noise-suppression methods should benefit research into coherent electronic dynamics, and they can be adapted to multidimensional spectroscopies across the infrared and ultraviolet frequency ranges.

  7. Two dimensional code for modeling of high ion cyclotron harmonic fast wave heating and current drive

    International Nuclear Information System (INIS)

    Grekov, D.; Kasilov, S.; Kernbichler, W.

    2016-01-01

    A two dimensional numerical code for computation of the electromagnetic field of a fast magnetosonic wave in a tokamak at high harmonics of the ion cyclotron frequency has been developed. The code computes the finite difference solution of the Maxwell equations for separate toroidal harmonics, making use of the toroidal symmetry of tokamak plasmas. Proper boundary conditions are prescribed at a realistic tokamak vessel. The currents in the RF antenna are specified externally and then used in Ampère's law. The main poloidal tokamak magnetic field and the 'kinetic' part of the dielectric permeability tensor are treated iteratively. The code has been verified against known analytical solutions, and first calculations of current drive in a spherical torus are presented.

  8. Three dimensional imaging of damage in structural materials using high resolution micro-tomography

    Energy Technology Data Exchange (ETDEWEB)

    Buffiere, J.-Y. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: jean-yves.buffiere@insa-lyon.fr; Proudhon, H. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Ferrie, E. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Ludwig, W. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Maire, E. [GEMPPM UMR CNRS 5510, INSA Lyon, 20 Av. A. Einstein, 69621 Villeurbanne Cedex (France); Cloetens, P. [ESRF Grenoble (France)

    2005-08-15

    This paper presents recent results showing the ability of high resolution synchrotron X-ray micro-tomography to image damage initiation and development during mechanical loading of structural metallic materials. First, the initiation, growth and coalescence of porosities in the bulk of two metal matrix composites have been imaged at different stages of a tensile test. Quantitative data on damage development has been obtained and related to the nature of the composite matrix. Second, three dimensional images of fatigue crack have been obtained in situ for two different Al alloys submitted to fretting and/or uniaxial in situ fatigue. The analysis of those images shows the strong interaction of the cracks with the local microstructure and provides unique experimental data for modelling the behaviour of such short cracks.

  9. Three dimensional imaging of damage in structural materials using high resolution micro-tomography

    International Nuclear Information System (INIS)

    Buffiere, J.-Y.; Proudhon, H.; Ferrie, E.; Ludwig, W.; Maire, E.; Cloetens, P.

    2005-01-01

    This paper presents recent results showing the ability of high resolution synchrotron X-ray micro-tomography to image damage initiation and development during mechanical loading of structural metallic materials. First, the initiation, growth and coalescence of porosities in the bulk of two metal matrix composites have been imaged at different stages of a tensile test. Quantitative data on damage development has been obtained and related to the nature of the composite matrix. Second, three dimensional images of fatigue crack have been obtained in situ for two different Al alloys submitted to fretting and/or uniaxial in situ fatigue. The analysis of those images shows the strong interaction of the cracks with the local microstructure and provides unique experimental data for modelling the behaviour of such short cracks

  10. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    Energy Technology Data Exchange (ETDEWEB)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)

    2010-02-14

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.

  11. Entanglement dynamics of high-dimensional bipartite field states inside the cavities in dissipative environments

    International Nuclear Information System (INIS)

    Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail; Bougouffa, Smail

    2010-01-01

    We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.

  12. Lithium decoration of three dimensional boron-doped graphene frameworks for high-capacity hydrogen storage

    International Nuclear Information System (INIS)

    Wang, Yunhui; Meng, Zhaoshun; Liu, Yuzhen; You, Dongsen; Wu, Kai; Lv, Jinchao; Wang, Xuezheng; Deng, Kaiming; Lu, Ruifeng; Rao, Dewei

    2015-01-01

    Based on density functional theory and the first principles molecular dynamics simulations, a three-dimensional B-doped graphene-interconnected framework has been constructed that shows good thermal stability even after metal loading. The average binding energy of adsorbed Li atoms on the proposed material (2.64 eV) is considerably larger than the cohesive energy per atom of bulk Li metal (1.60 eV). This value is ideal for atomically dispersed Li doping in experiments. From grand canonical Monte Carlo simulations, high hydrogen storage capacities of 5.9 wt% and 52.6 g/L in the Li-decorated material are attained at 298 K and 100 bars

  13. Time–energy high-dimensional one-side device-independent quantum key distribution

    International Nuclear Information System (INIS)

    Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei

    2017-01-01

    Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) needs fewer requirements, which are much easier to meet. In this paper, by applying recently developed novel time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) protocol and provide a security proof against coherent attacks. Besides, we connect the security with quantum steering. By numerical simulation, we obtain the secret key rate for Alice's different detection efficiencies. The results show that our protocol can perform much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice's detection efficiency, and the dispersion coefficient. Finally, we briefly analyze its performance in the optical fiber channel. (paper)

  14. Propagation of Elastic Waves in a One-Dimensional High Aspect Ratio Nanoridge Phononic Crystal

    Directory of Open Access Journals (Sweden)

    Abdellatif Gueddida

    2018-05-01

    Full Text Available We investigate the propagation of elastic waves in a one-dimensional (1D) phononic crystal constituted by high aspect ratio epoxy nanoridges that have been deposited at the surface of a glass substrate. With the help of the finite element method (FEM), we calculate the dispersion curves of the modes localized at the surface for propagation both parallel and perpendicular to the nanoridges. When the direction of the wave is parallel to the nanoridges, we find that the vibrational states coincide with the Lamb modes of an infinite plate that correspond to one nanoridge. When the direction of wave propagation is perpendicular to the 1D nanoridges, the localized modes inside the nanoridges give rise to flat branches in the band structure that interact with the surface Rayleigh mode, and possibly open narrow band gaps. Filling the nanoridge structure with a viscous liquid produces new modes that propagate along the 1D finite height multilayer array.

  15. A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; second, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.
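
The variance inflation being cured can be demonstrated directly: with far more dimensions than training samples, projections of the training data onto their own leading principal axis show a variance far above the true value, while held-out data from the same distribution do not. A minimal sketch, with arbitrary sample sizes and dimensions:

```python
import numpy as np

# Hedged illustration of variance inflation in small-sample,
# high-dimensional PCA. For isotropic data the true variance along
# any fixed direction is 1, yet the training projections onto the
# fitted leading axis are strongly inflated.
rng = np.random.default_rng(0)
p, n_train, n_test = 500, 20, 1000          # many more dimensions than samples
train = rng.standard_normal((n_train, p))
test = rng.standard_normal((n_test, p))

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pc1 = vt[0]                                 # leading principal axis (unit norm)

var_train = np.var((train - mean) @ pc1)    # inflated well above 1
var_test = np.var((test - mean) @ pc1)      # close to the true value 1
print(var_train, var_test)
```

The leave-one-out renormalization discussed in the abstract estimates exactly this train/test variance gap so that projections of new data can be put on a comparable scale.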

  16. Advances in high-resolution imaging: techniques for three-dimensional imaging of cellular structures.

    Science.gov (United States)

    Lidke, Diane S; Lidke, Keith A

    2012-06-01

    A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques.

  17. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields, especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios.
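
The randomization idea can be sketched as follows: refit the Lasso on data with permuted responses to build a null distribution for each coefficient's magnitude, then report the fraction of permuted fits that match or exceed the observed fit. The tiny coordinate-descent Lasso and this particular permutation scheme are stand-ins for illustration, not the authors' exact procedure.

```python
import numpy as np

# Hedged sketch: permutation p-values for Lasso-selected features.
def lasso_cd(X, y, lam, n_iter=100):
    """Minimal coordinate-descent Lasso (columns of X roughly standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed from the fit
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def permutation_pvalues(X, y, lam, n_perm=50, seed=0):
    """Refit on shuffled responses to get a null for each |beta_j|."""
    rng = np.random.default_rng(seed)
    obs = np.abs(lasso_cd(X, y, lam))
    null = np.stack([np.abs(lasso_cd(X, rng.permutation(y), lam))
                     for _ in range(n_perm)])
    # add-one smoothing keeps p-values strictly positive
    return (1 + (null >= obs).sum(axis=0)) / (1 + n_perm)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 10))
y = 2.0 * X[:, 0] + 0.5 * rng.standard_normal(40)  # only feature 0 is real
pvals = permutation_pvalues(X, y, lam=5.0)
```

Permuting the responses breaks any feature-response association while preserving the marginal distributions, so the error rate is controlled by construction even at small sample sizes, which is the point the abstract emphasizes.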

  18. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera

    Directory of Open Access Journals (Sweden)

    Gregory A. Howland

    2013-02-01

    Full Text Available We implement a double-pixel compressive-sensing camera to efficiently characterize, at high resolution, the spatially entangled fields that are produced by spontaneous parametric down-conversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor up to n²/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the entangled photons' classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.

  19. LABAN-PEL: a two-dimensional, multigroup diffusion, high-order response matrix code

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-06-01

    The capabilities of LABAN-PEL are described. LABAN-PEL is a modified version of the two-dimensional, high-order response matrix code LABAN, written by Lindahl. The new version extends the capabilities of the original code with regard to the treatment of neutron migration by including an option to utilize full group-to-group diffusion coefficient matrices. In addition, the code has been converted from single to double precision, and the necessary routines have been added to activate its multigroup capability. The coding has also been converted to standard FORTRAN-77 to enhance the portability of the code. Details regarding the input data requirements and calculational options of LABAN-PEL are provided. 13 refs

  20. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    Science.gov (United States)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the fractional-order problem into an easily solvable system of algebraic equations whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is verified by comparing the results of our MATLAB simulations with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  1. Three-dimensional Core Design of a Super Fast Reactor with a High Power Density

    International Nuclear Information System (INIS)

    Cao, Liangzhi; Oka, Yoshiaki; Ishiwatari, Yuki; Ikejiri, Satoshi; Ju, Haitao

    2010-01-01

    The SuperCritical Water-cooled Reactor (SCWR) pursues a high power density to reduce its capital cost. The fast spectrum SCWR, called a super fast reactor, can be designed with a higher power density than the thermal spectrum SCWR. The mechanism of increasing the average power density of the super fast reactor is studied theoretically and numerically. Some key parameters affecting the average power density, including fuel pin outer diameter, fuel pitch, power peaking factor, and the fraction of seed assemblies, are analyzed and optimized to achieve a more compact core. Based on those sensitivity analyses, a compact super fast reactor is successfully designed with an average power density of 294.8 W/cm³. The core characteristics are analyzed by using a three-dimensional coupled neutronics/thermal-hydraulics method. Numerical results show that all of the design criteria and goals are satisfied.

  2. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
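
One plausible concrete form of the one-sample statistic, consistent with the description above, is the per-dimension normal likelihood ratio under a diagonal covariance, -2 log Λ_j = n log(1 + t_j²/(n - 1)), summed across dimensions as log-transformed squared t-statistics. This form and the omission of the centering and scaling constants used for asymptotic calibration are our assumptions, not the paper's exact definition.

```python
import math

# Hedged sketch: a sum of log-transformed squared t-statistics as a
# one-sample diagonal likelihood ratio statistic. Calibration
# constants for the asymptotic normal reference are omitted.
def one_sample_t(xs):
    """Ordinary one-sample t-statistic for H0: mean = 0."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / math.sqrt(var / n)

def diagonal_lrt(columns):
    """columns: list of per-dimension samples, each of length n."""
    n = len(columns[0])
    stat = 0.0
    for col in columns:
        t = one_sample_t(col)
        stat += n * math.log(1.0 + t * t / (n - 1))
    return stat

# A dimension centered near zero contributes little; a clearly
# shifted dimension dominates the statistic.
stat = diagonal_lrt([[0.1, -0.2, 0.05, 0.0], [1.9, 2.1, 2.0, 2.2]])
```

Summing log(1 + t²/(n - 1)) rather than t² itself damps the influence of any single extreme dimension, which matches the abstract's contrast with a direct summation of squared t-statistics.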

  3. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    Science.gov (United States)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest for clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.

  4. Highly mobile charge-transfer excitons in two-dimensional WS2/tetracene heterostructures

    Science.gov (United States)

    Zhu, Tong; Yuan, Long; Zhao, Yan; Zhou, Mingwei; Wan, Yan; Mei, Jianguo; Huang, Libai

    2018-01-01

    Charge-transfer (CT) excitons at heterointerfaces play a critical role in light to electricity conversion using organic and nanostructured materials. However, how CT excitons migrate at these interfaces is poorly understood. We investigate the formation and transport of CT excitons in two-dimensional WS2/tetracene van der Waals heterostructures. Electron and hole transfer occurs on the time scale of a few picoseconds, and emission of interlayer CT excitons with a binding energy of ~0.3 eV has been observed. Transport of the CT excitons is directly measured by transient absorption microscopy, revealing coexistence of delocalized and localized states. Trapping-detrapping dynamics between the delocalized and localized states leads to stretched-exponential photoluminescence decay with an average lifetime of ~2 ns. The delocalized CT excitons are remarkably mobile with a diffusion constant of ~1 cm² s⁻¹. These highly mobile CT excitons could have important implications in achieving efficient charge separation. PMID:29340303

  5. High mobility two-dimensional electron gases in nitride heterostructures with high Al composition AlGaN alloy barriers

    International Nuclear Information System (INIS)

    Li Guowang; Cao Yu; Xing Huili Grace; Jena, Debdeep

    2010-01-01

    We report high-electron mobility nitride heterostructures with >70% Al composition AlGaN alloy barriers grown by molecular beam epitaxy. Direct growth of such AlGaN layers on GaN resulted in hexagonal trenches and a low mobility polarization-induced charge. By applying growth interruption at the heterojunction, the surface morphology improved dramatically and the room temperature two-dimensional electron gas (2DEG) mobility increased by an order of magnitude, exceeding 1300 cm2/V s. The 2DEG density was tunable at 0.4-3.7 × 10^13/cm2 by varying the total barrier thickness (t). Surface barrier heights of the heterostructures were extracted and exhibited dependence on t.

  6. Chemically doped three-dimensional porous graphene monoliths for high-performance flexible field emitters.

    Science.gov (United States)

    Kim, Ho Young; Jeong, Sooyeon; Jeong, Seung Yol; Baeg, Kang-Jun; Han, Joong Tark; Jeong, Mun Seok; Lee, Geon-Woong; Jeong, Hee Jin

    2015-03-12

    Despite recent progress in the fabrication of field emitters based on graphene nanosheets, their morphological and electrical properties, which affect their degree of field enhancement as well as their electron tunnelling barrier height, must be controlled to achieve better field-emission properties. Here we report a method for synthesizing graphene-based emitters with a high field-enhancement factor and a low work function. The method involves forming monolithic three-dimensional (3D) graphene structures by freeze-drying a highly concentrated graphene paste, followed by work-function engineering through chemical doping. Graphene structures with vertically aligned edges were successfully fabricated by the freeze-drying process, and their number density could be controlled by varying the composition of the graphene paste. Al- and Au-doped 3D graphene emitters were fabricated by introducing the corresponding dopant solutions into the graphene sheets, and the field-emission characteristics of the resulting emitters are discussed. The synthesized 3D graphene emitters were highly flexible, maintaining their field-emission properties even when bent at large angles. This is attributed to the high crystallinity, emitter density and chemical stability of the 3D graphene emitters, as well as to the strong interactions between the emitters and the substrate.

  7. High-Current Gain Two-Dimensional MoS2-Base Hot-Electron Transistors

    KAUST Repository

    Torres, Carlos M.

    2015-12-09

    The vertical transport of nonequilibrium charge carriers through semiconductor heterostructures has led to milestones in electronics with the development of the hot-electron transistor. Recently, significant advances have been made with atomically sharp heterostructures implementing various two-dimensional materials. Although graphene-base hot-electron transistors show great promise for electronic switching at high frequencies, they are limited by their low current gain. Here we show that, by choosing MoS2 and HfO2 for the filter barrier interface and using a noncrystalline semiconductor such as ITO for the collector, we can achieve an unprecedentedly high current gain (α ∼ 0.95) in our hot-electron transistors operating at room temperature. Furthermore, the current gain can be tuned over 2 orders of magnitude with the collector-base voltage, although this feature currently comes at the cost of poor output resistance and poor intrinsic voltage gain. We anticipate our transistors will pave the way toward the realization of novel flexible 2D-material-based high-density, low-energy, and high-frequency hot-carrier electronic applications. © 2015 American Chemical Society.

  8. Preparing two-dimensional microporous carbon from Pistachio nutshell with high areal capacitance as supercapacitor materials

    Science.gov (United States)

    Xu, Jiandong; Gao, Qiuming; Zhang, Yunlu; Tan, Yanli; Tian, Weiqian; Zhu, Lihua; Jiang, Lei

    2014-07-01

    Two-dimensional (2D) porous carbon AC-SPN-3, possessing a remarkably high micropore volume ratio of 83% and a large surface area of about 1069 m2 g-1, is obtained in high yield by pyrolysis of waste Pistachio nutshells with KOH activation. The AC-SPN-3 has a curved 2D lamellar morphology with a slice thickness of about 200 nm. The porous carbon consists of highly interconnected uniform pores with a median pore diameter of about 0.76 nm, which could improve performance by maximizing the electrode surface area accessible to typical electrolyte ions (such as TEA+, diameter = ~0.68 nm). Electrochemical analyses show that AC-SPN-3 has a large areal capacitance of 29.3/20.1 μF cm-2 and a high energy density of 10/39 Wh kg-1 at a power of 52/286 kW kg-1 in 6 M KOH aqueous electrolyte and 1 M TEABF4 in EC-DEC (1:1) organic electrolyte, respectively.

  9. High-Current Gain Two-Dimensional MoS2-Base Hot-Electron Transistors

    KAUST Repository

    Torres, Carlos M.; Lan, Yann Wen; Zeng, Caifu; Chen, Jyun Hong; Kou, Xufeng; Navabi, Aryan; Tang, Jianshi; Montazeri, Mohammad; Adleman, James R.; Lerner, Mitchell B.; Zhong, Yuan Liang; Li, Lain-Jong; Chen, Chii Dong; Wang, Kang L.

    2015-01-01

    The vertical transport of nonequilibrium charge carriers through semiconductor heterostructures has led to milestones in electronics with the development of the hot-electron transistor. Recently, significant advances have been made with atomically sharp heterostructures implementing various two-dimensional materials. Although graphene-base hot-electron transistors show great promise for electronic switching at high frequencies, they are limited by their low current gain. Here we show that, by choosing MoS2 and HfO2 for the filter barrier interface and using a noncrystalline semiconductor such as ITO for the collector, we can achieve an unprecedentedly high current gain (α ∼ 0.95) in our hot-electron transistors operating at room temperature. Furthermore, the current gain can be tuned over 2 orders of magnitude with the collector-base voltage, although this feature currently comes at the cost of poor output resistance and poor intrinsic voltage gain. We anticipate our transistors will pave the way toward the realization of novel flexible 2D-material-based high-density, low-energy, and high-frequency hot-carrier electronic applications. © 2015 American Chemical Society.

  10. Woods with physical, mechanical and acoustic properties similar to those of Caesalpinia echinata have high potential as alternative woods for bow makers

    Directory of Open Access Journals (Sweden)

    Eduardo Luiz Longui

    2014-09-01

    For nearly two hundred years, Caesalpinia echinata wood has been the standard for modern bows. However, the threat of extinction and the enforcement of trade bans have required bow makers to seek alternative woods. The hypothesis tested was that woods with physical, mechanical and acoustic properties similar to those of C. echinata would have high potential as alternative woods for bows. Accordingly, Handroanthus spp., Mezilaurus itauba, Hymenaea spp., Dipteryx spp., Diplotropis spp. and Astronium lecointei were investigated. Handroanthus and Diplotropis have the greatest number of similarities with C. echinata, but only Handroanthus spp. showed significant results in actual bow manufacture, suggesting the importance of such key properties as specific gravity, speed of sound propagation and modulus of elasticity. In practice, Handroanthus and Dipteryx produced bows of quality similar to that of C. echinata.

  11. Local imaging of high mobility two-dimensional electron systems with virtual scanning tunneling microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Pelliccione, M. [Department of Applied Physics, Stanford University, 348 Via Pueblo Mall, Stanford, California 94305 (United States); Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Department of Physics, University of California, Santa Barbara, Santa Barbara, California 93106 (United States); Bartel, J.; Goldhaber-Gordon, D. [Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, California 94305 (United States); Sciambi, A. [Department of Applied Physics, Stanford University, 348 Via Pueblo Mall, Stanford, California 94305 (United States); Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025 (United States); Pfeiffer, L. N.; West, K. W. [Department of Electrical Engineering, Princeton University, Princeton, New Jersey 08544 (United States)

    2014-11-03

    Correlated electron states in high mobility two-dimensional electron systems (2DESs), including charge density waves and microemulsion phases intermediate between a Fermi liquid and Wigner crystal, are predicted to exhibit complex local charge order. Existing experimental studies, however, have mainly probed these systems at micron to millimeter scales rather than directly mapping spatial organization. Scanning probes should be well-suited to study the spatial structure of these states, but high mobility 2DESs are found at buried semiconductor interfaces, beyond the reach of conventional scanning tunneling microscopy. Scanning techniques based on electrostatic coupling to the 2DES deliver important insights, but generally with resolution limited by the depth of the 2DES. In this letter, we present our progress in developing a technique called “virtual scanning tunneling microscopy” that allows local tunneling into a high mobility 2DES. Using a specially designed bilayer GaAs/AlGaAs heterostructure where the tunnel coupling between two separate 2DESs is tunable via electrostatic gating, combined with a scanning gate, we show that the local tunneling can be controlled with sub-250 nm resolution.

  12. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    Science.gov (United States)

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to gain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with only a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
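    The general recipe of pairing a cross-validated penalty with subsample-based selection frequencies can be sketched as follows. This is an illustrative simplification in the spirit of PROMISE, not the authors' exact algorithm; the data, the number of subsamples and the 0.6 frequency threshold are all synthetic choices:

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0                              # three true markers, rest are noise
y = X @ beta + rng.standard_normal(n)

# CV step: choose the penalty by cross-validated prediction accuracy.
alpha = LassoCV(cv=5, random_state=0).fit(X, y).alpha_

# Stability step: refit on random half-samples and count how often each
# variable survives; keep variables above a frequency threshold.
freq = np.zeros(p)
for _ in range(50):
    idx = rng.choice(n, n // 2, replace=False)
    coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
    freq += coef != 0
selected = np.where(freq / 50 >= 0.6)[0]
print(selected)
```

    The CV-chosen penalty keeps prediction accuracy high, while the frequency filter discards variables that are picked only sporadically across subsamples.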

  13. A three-dimensional graphene aerogel containing solvent-free polyaniline fluid for high performance supercapacitors.

    Science.gov (United States)

    Gao, Zhaodongfang; Yang, Junwei; Huang, Jing; Xiong, Chuanxi; Yang, Quanling

    2017-11-23

    Conducting polymer based supercapacitors usually suffer from the difficulty of achieving high specific capacitance and good long-term stability simultaneously. In this communication, a long-chain protonic acid doped solvent-free self-suspended polyaniline (S-PANI) fluid and reduced graphene oxide (RGO) were used to fabricate a three-dimensional RGO/S-PANI aerogel via a simple self-assembled hydrothermal method, which was then applied as a supercapacitor electrode. This 3D RGO/S-PANI composite exhibited a high specific capacitance of up to 480 F g-1 at a current density of 1 A g-1 and 334 F g-1 even at a high discharge rate of 40 A g-1. An outstanding cycling performance, with 96.14% of the initial capacitance remaining after 10 000 charging/discharging cycles at a rate of 10 A g-1, was also achieved. Compared with conventional conducting polymer materials, the 3D RGO/S-PANI composite presented more reliable rate capability and cycling stability. Moreover, S-PANI possesses excellent processability, thereby revealing its enormous potential in large-scale production. We anticipate that the solvent-free fluid technique is also applicable to the preparation of other 3D graphene/polymer materials for energy storage.

  14. A Fast and High-precision Orientation Algorithm for BeiDou Based on Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    ZHAO Jiaojiao

    2015-05-01

    A fast and high-precision orientation algorithm for BeiDou is proposed based on a detailed analysis of the BeiDou constellation, in particular the features of its GEO satellites. Taking advantage of their good east-west geometry, candidate baseline vectors are first solved from the GEO satellite observations combined with dimensionality-reduction theory. The ambiguity function is then used to screen these candidates to obtain the optimal baseline vector and the wide-lane integer ambiguities, on the basis of which the B1 ambiguities are solved. Finally, the high-precision orientation is estimated from the determined B1 ambiguities. The new algorithm not only improves the ill-conditioning of the traditional algorithm but also greatly reduces the ambiguity search region, allowing the integer ambiguities to be calculated in a single epoch. The algorithm was simulated with an actual BeiDou ephemeris, and the results show that the method is efficient and fast for orientation, achieving a very high single-epoch success rate (99.31%) and accurate attitude angles (standard deviations of pitch and heading of 0.07° and 0.13°, respectively) in a real-time, dynamic environment.

  15. Digested sludge-derived three-dimensional hierarchical porous carbon for high-performance supercapacitor electrode.

    Science.gov (United States)

    Zhang, Jia-Jia; Fan, Hao-Xiang; Dai, Xiao-Hu; Yuan, Shi-Jie

    2018-04-01

    Digested sludge, as the main by-product of the sewage sludge anaerobic digestion process, still contains considerable organic compounds. In this protocol, we report a facile method for preparing digested sludge-derived self-doped porous carbon material for high-performance supercapacitor electrodes via a sustainable pyrolysis/activation process. The obtained digested sludge-derived carbon material (HPDSC) exhibits a versatile O-, N-doped hierarchical porous framework, high specific surface area (2103.6 m2 g-1) and a partial graphitization phase, which can facilitate ion transport, provide more storage sites for electrolyte ions and enhance the conductivity of active electrode materials. The HPDSC-based supercapacitor electrodes show favourable energy storage performance, with a specific capacitance of 245 F g-1 at 1.0 A g-1 in 0.5 M Na2SO4; outstanding cycling stability, with 98.4% capacitance retention after 2000 cycles; and good rate performance (211 F g-1 at 11 A g-1). This work provides a unique self-doped three-dimensional hierarchical porous carbon material with a favourable charge storage capacity and at the same time finds a high value-added and environment-friendly strategy for disposal and recycling of digested sludge.

  16. Three-dimensional N-doped graphene/polyaniline composite foam for high performance supercapacitors

    Science.gov (United States)

    Zhu, Jun; Kong, Lirong; Shen, Xiaoping; Chen, Quanrun; Ji, Zhenyuan; Wang, Jiheng; Xu, Keqiang; Zhu, Guoxing

    2018-01-01

    Three-dimensional (3D) graphene aerogels and their composites with interconnected pores have attracted continuous interest in the energy storage field owing to their large surface area and hierarchical pore structure. Herein, we report the preparation of a 3D nitrogen-doped graphene/polyaniline (N-GE/PANI) composite foam as a supercapacitive material with greatly improved electrochemical performance. The 3D porous structure allows the penetration and diffusion of electrolyte, the nitrogen doping enhances the wettability of the active material and the number of active sites in contact with the electrolyte, and both the N-GE and the PANI ensure the high electrical conductivity of the whole electrode. Moreover, the synergistic effect between the N-GE and PANI also plays an important role in the electrochemical performance of the electrode. Therefore, the as-prepared composite foam delivers a high specific capacitance of 528 F g-1 at 0.1 A g-1 and high cyclic stability with 95.9% capacitance retention after 5000 charge-discharge cycles. This study provides a new route to improving the energy storage capacity of supercapacitors by using 3D graphene-based pseudocapacitive electrode materials.

  17. Digested sludge-derived three-dimensional hierarchical porous carbon for high-performance supercapacitor electrode

    Science.gov (United States)

    Zhang, Jia-Jia; Fan, Hao-Xiang; Dai, Xiao-Hu; Yuan, Shi-Jie

    2018-04-01

    Digested sludge, as the main by-product of the sewage sludge anaerobic digestion process, still contains considerable organic compounds. In this protocol, we report a facile method for preparing digested sludge-derived self-doped porous carbon material for high-performance supercapacitor electrodes via a sustainable pyrolysis/activation process. The obtained digested sludge-derived carbon material (HPDSC) exhibits versatile O-, N-doped hierarchical porous framework, high specific surface area (2103.6 m2 g-1) and partial graphitization phase, which can facilitate ion transport, provide more storage sites for electrolyte ions and enhance the conductivity of active electrode materials. The HPDSC-based supercapacitor electrodes show favourable energy storage performance, with a specific capacitance of 245 F g-1 at 1.0 A g-1 in 0.5 M Na2SO4; outstanding cycling stability, with 98.4% capacitance retention after 2000 cycles; and good rate performance (211 F g-1 at 11 A g-1). This work provides a unique self-doped three-dimensional hierarchical porous carbon material with a favourable charge storage capacity and at the same time finds a high value-added and environment-friendly strategy for disposal and recycling of digested sludge.

  18. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slant optical axis (misalignment of the camera's optical axis and the object surface) and (2) out-of-plane motions (translations and rotations) of the specimen. These introduce measurement errors into the 2D DIC results, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method is proposed to correct the unsatisfactory results. The method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted to the specimen to track its motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated with a coordinate-transform algorithm; (3) the three-dimensional world coordinates of the measuring points on the specimen are reconstructed via the coordinate-transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed method. They show that, with an extensometer length of 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The method gives good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees, and it has been applied in tensile experiments to obtain high-accuracy results.
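    The coordinate-transform step from markers at known positions can be illustrated with a generic rigid-transform estimate (a Kabsch/Procrustes fit). This is a stand-in sketch, not the authors' implementation, and the marker coordinates below are synthetic:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch fit: find R, t minimizing ||R @ src_i + t - dst_i|| over markers."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

# Synthetic check: recover a known 5-degree rotation about z plus a translation.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 3.0])
markers = np.random.default_rng(1).uniform(-1, 1, (6, 3))
moved = markers @ R_true.T + t_true
R, t = rigid_transform(markers, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    Once the panel's pose is known for each frame, the measuring points can be mapped back into a common world frame before strains are computed.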

  19. Multistack integration of three-dimensional hyperbranched anatase titania architectures for high-efficiency dye-sensitized solar cells.

    Science.gov (United States)

    Wu, Wu-Qiang; Xu, Yang-Fan; Rao, Hua-Shang; Su, Cheng-Yong; Kuang, Dai-Bin

    2014-04-30

    We report the functional integration of three-dimensional hyperbranched titania architectures into an efficient multistack photoanode, constructed via layer-by-layer assembly of hyperbranched hierarchical tree-like titania nanowires (underlayer), branched hierarchical rambutan-like titania hollow submicrometer-sized spheres (intermediate layer), and hyperbranched hierarchical urchin-like titania micrometer-sized spheres (top layer). Owing to favorable charge collection, superior light-harvesting efficiency and extended electron lifetime, the multilayered TiO2-based devices showed greater J(sc) and V(oc) than a conventional TiO2 nanoparticle (TNP) cell, and an overall power conversion efficiency of 11.01% (J(sc) = 18.53 mA cm(-2); V(oc) = 827 mV and FF = 0.72) was attained, remarkably outperforming a TNP-based reference cell (η = 7.62%) with a similar film thickness. Meanwhile, the facile film-fabrication techniques (hydrothermal growth and drop-casting) provide a simple and promising route to photovoltaic devices with a high performance/cost ratio in a sustainable way.

  20. Low-storage implicit/explicit Runge-Kutta schemes for the simulation of stiff high-dimensional ODE systems

    Science.gov (United States)

    Cavaglieri, Daniele; Bewley, Thomas

    2015-04-01

    Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
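    The IMEX splitting idea that these schemes refine can be shown with the simplest first-order example: treat the stiff linear term implicitly and the nonstiff term explicitly. This is a hedged sketch on a scalar model problem, not one of the paper's schemes:

```python
import math

def imex_euler(u0, lam, dt, n_steps):
    """First-order IMEX step for u' = -lam*u + sin(t):
    implicit Euler on the stiff linear term, explicit Euler on sin(t)."""
    u, t = u0, 0.0
    for _ in range(n_steps):
        # Explicit evaluation of the nonstiff term, implicit solve for the
        # stiff term: (1 + dt*lam) * u_new = u + dt*sin(t)
        u = (u + dt * math.sin(t)) / (1.0 + dt * lam)
        t += dt
    return u

# Stable even when dt*lam >> 1, where a fully explicit step would blow up.
print(imex_euler(1.0, lam=1000.0, dt=0.1, n_steps=100))
```

    Higher-order IMEX RK schemes apply the same splitting stage by stage, and the low-storage variants in the paper additionally overwrite registers so that only two to four arrays of length N are kept.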

  1. Three-dimensional carbon nanotube networks with a supported nickel oxide nanonet for high-performance supercapacitors.

    Science.gov (United States)

    Wu, Mao-Sung; Zheng, Yo-Ru; Lin, Guan-Wei

    2014-08-04

    A three-dimensional porous carbon nanotube film with a supported NiO nanonet was prepared by simple electrophoretic deposition and hydrothermal synthesis; it could deliver a high specific capacitance of 1511 F g-1 at a high discharge current of 50 A g-1 due to the significantly improved transport of the electrolyte and electrons.

  2. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction, which are critical components of many cognitive functions as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
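    Hashing that respects local similarity can be sketched with the classic random-hyperplane scheme, where vectors with high cosine similarity tend to agree in more hash bits. This is an illustrative example, not taken from the article; the dimensions and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
planes = rng.standard_normal((16, 64))        # 16 random hyperplanes in R^64

def lsh(v):
    """16-bit signature: which side of each hyperplane the vector falls on."""
    return tuple((planes @ v > 0).astype(int))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

base = rng.standard_normal(64)
near = base + 0.1 * rng.standard_normal(64)   # small perturbation of base
far = rng.standard_normal(64)                 # unrelated vector
print(hamming(lsh(base), lsh(near)), hamming(lsh(base), lsh(far)))
```

    Because nearby vectors collide on most bits, signatures can index an associative memory: looking up a signature (or its near neighbors in Hamming space) retrieves similar stored items without a linear scan.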

  3. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction using the Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista has the advantages of modest expertise requirements, broad applicability, simple operation, and high precision. Point-positioning and height-determination accuracy at the one-pixel level can be achieved provided that the digital elevation model (DEM) and the sensor orientation model are sufficiently precise and the off-nadir view angle is favorable.

  4. Spherical anharmonic oscillator in self-similar approximation

    International Nuclear Information System (INIS)

    Yukalova, E.P.; Yukalov, V.I.

    1992-01-01

    The method of self-similar approximation is applied here to calculate the eigenvalues of the three-dimensional spherical anharmonic oscillator. The advantage of this method lies in its simplicity and high accuracy. Comparison with other known analytical methods shows that this method is simpler and more accurate. 25 refs

  5. Similarity transformed equation of motion coupled-cluster theory based on an unrestricted Hartree-Fock reference for applications to high-spin open-shell systems.

    Science.gov (United States)

    Huntington, Lee M J; Krupička, Martin; Neese, Frank; Izsák, Róbert

    2017-11-07

    The similarity transformed equation of motion coupled-cluster approach is extended for applications to high-spin open-shell systems, within the unrestricted Hartree-Fock (UHF) formalism. An automatic active space selection scheme has also been implemented such that calculations can be performed in a black-box fashion. It is observed that both the canonical and automatic active space selecting similarity transformed equation of motion (STEOM) approaches perform about as well as the more expensive equation of motion coupled-cluster singles doubles (EOM-CCSD) method for the calculation of the excitation energies of doublet radicals. The automatic active space selecting UHF STEOM approach can therefore be employed as a viable, lower scaling alternative to UHF EOM-CCSD for the calculation of excited states in high-spin open-shell systems.

  6. Similarity transformed equation of motion coupled-cluster theory based on an unrestricted Hartree-Fock reference for applications to high-spin open-shell systems

    Science.gov (United States)

    Huntington, Lee M. J.; Krupička, Martin; Neese, Frank; Izsák, Róbert

    2017-11-01

    The similarity transformed equation of motion coupled-cluster approach is extended for applications to high-spin open-shell systems, within the unrestricted Hartree-Fock (UHF) formalism. An automatic active space selection scheme has also been implemented such that calculations can be performed in a black-box fashion. It is observed that both the canonical and automatic active space selecting similarity transformed equation of motion (STEOM) approaches perform about as well as the more expensive equation of motion coupled-cluster singles doubles (EOM-CCSD) method for the calculation of the excitation energies of doublet radicals. The automatic active space selecting UHF STEOM approach can therefore be employed as a viable, lower scaling alternative to UHF EOM-CCSD for the calculation of excited states in high-spin open-shell systems.

  7. Miniature robust five-dimensional fingertip force/torque sensor with high performance

    International Nuclear Information System (INIS)

    Liang, Qiaokang; Huang, Xiuxiang; Li, Zhongyang; Zhang, Dan; Ge, Yunjian

    2011-01-01

    This paper proposes an innovative design for a five-dimensional fingertip force/torque sensor with a dual annular diaphragm. The sensor can be applied to a robot hand to measure forces along the X-, Y- and Z-axes (Fx, Fy and Fz) and moments about the X- and Y-axes (Mx and My) simultaneously. The sensing principle, the structural design and the overload-protection mechanism are presented in detail. Then, based on the design-of-experiments approach provided by the software ANSYS®, a finite element analysis and a design optimization are performed with the objective of achieving both high sensitivity and high stiffness. Furthermore, static and dynamic calibrations based on a neural-network method are carried out. Finally, an application of the developed sensor on a dexterous robot hand is demonstrated. The calibration experiments and the application show that the developed sensor possesses high performance and robustness.
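    Static calibration of such a sensor amounts to learning a map from gauge outputs to the five load components (Fx, Fy, Fz, Mx, My). The paper uses a neural network; the sketch below substitutes the simplest linear least-squares map on synthetic data to show the idea:

```python
import numpy as np

rng = np.random.default_rng(7)
C_true = rng.standard_normal((5, 5))          # unknown sensor response matrix
loads = rng.uniform(-10, 10, (200, 5))        # applied reference loads
# Noisy gauge readings produced by the (unknown) sensor matrix:
readings = loads @ C_true.T + 1e-4 * rng.standard_normal((200, 5))

# Calibration: find C_cal such that readings @ C_cal ≈ loads (least squares).
C_cal, *_ = np.linalg.lstsq(readings, loads, rcond=None)

# Applying the calibration to a fresh reading recovers the applied load.
test_load = np.array([1.0, -2.0, 3.0, 0.5, -0.5])
recovered = (test_load @ C_true.T) @ C_cal
print(np.round(recovered, 3))
```

    A neural network generalizes this linear map to the nonlinear and coupled responses of a real diaphragm, but the calibration workflow (apply known loads, record outputs, fit the inverse map) is the same.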

  8. A high throughput mass spectrometry screening analysis based on two-dimensional carbon microfiber fractionation system.

    Science.gov (United States)

    Ma, Biao; Zou, Yilin; Xie, Xuan; Zhao, Jinhua; Piao, Xiangfan; Piao, Jingyi; Yao, Zhongping; Quinto, Maurizio; Wang, Gang; Li, Donghao

    2017-06-09

    A novel high-throughput, solvent-saving and versatile integrated two-dimensional microscale carbon fiber/active carbon fiber system (2DμCFs), which allows a simple and rapid separation of compounds into low-polarity, medium-polarity and high-polarity fractions, has been coupled with ambient ionization mass spectrometry (ESI-Q-TOF-MS and ESI-QqQ-MS) for screening and quantitative analyses of real samples. The 2DμCFs led to a substantial reduction of interferences and ionization-suppression effects, thus increasing the sensitivity and the screening capabilities of the subsequent MS analysis. The method was applied to the analysis of Schisandra chinensis extracts, achieving with a single injection the simultaneous determination of 33 compounds of different polarities, such as organic acids, lignans, and flavonoids, in less than 7 min, at low pressures and using small solvent amounts. The method was also validated using 10 model compounds, giving limits of detection (LODs) ranging from 0.3 to 30 ng mL-1, satisfactory recoveries (from 75.8 to 93.2%) and reproducibilities (relative standard deviations, RSDs, from 1.40 to 8.06%). Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Three-dimensional porous MXene/layered double hydroxide composite for high performance supercapacitors

    Science.gov (United States)

    Wang, Ya; Dou, Hui; Wang, Jie; Ding, Bing; Xu, Yunling; Chang, Zhi; Hao, Xiaodong

    2016-09-01

    In this work, an exfoliated MXene (e-MXene) nanosheet/nickel-aluminum layered double hydroxide (MXene/LDH) composite is fabricated as a supercapacitor electrode material by in situ growth of LDH on an e-MXene substrate. The LDH platelets homogeneously grown on the surface of the e-MXene sheets construct a three-dimensional (3D) porous structure, which not only exposes a large number of LDH active sites and facilitates liquid electrolyte penetration, but also alleviates the volume change of LDH during the charge/discharge process. Meanwhile, the e-MXene substrate forms a conductive network that facilitates electron transport in the active material. The optimized MXene/LDH composite exhibits a high specific capacitance of 1061 F g-1 at a current density of 1 A g-1, excellent capacitance retention of 70% after 4000 cycles at a current density of 4 A g-1 and a good rate capability, retaining 556 F g-1 at 10 A g-1.

  10. Doping of two-dimensional MoS2 by high energy ion implantation

    Science.gov (United States)

    Xu, Kang; Zhao, Yuda; Lin, Ziyuan; Long, Yan; Wang, Yi; Chan, Mansun; Chai, Yang

    2017-12-01

    Two-dimensional (2D) materials have been demonstrated to be promising candidates for next-generation electronic circuits. Analogous to conventional Si-based semiconductors, p- and n-doping of 2D materials are essential for building complementary circuits. Controllable and effective doping strategies require large tunability of the doping level and negligible structural damage to ultrathin 2D materials. In this work, we demonstrate a doping method utilizing a conventional high-energy ion-implantation machine. Before the implantation, a poly(methyl methacrylate) (PMMA) protective layer is used to decelerate the dopant ions and minimize the structural damage to MoS2, thus concentrating the dopants inside the MoS2 flakes. By optimizing the implantation energy and fluence, phosphorus dopants are incorporated into MoS2 flakes. Our Raman and high-resolution transmission electron microscopy (HRTEM) results show that only negligible structural damage is introduced to the MoS2 lattice during the implantation. The p-doping effect of the incorporated P+ is demonstrated by photoluminescence (PL) and electrical characterizations. A thin PMMA protection layer leads to larger kinetic damage but also a more significant doping effect. Also, MoS2 flakes of larger thickness show less kinetic damage. This doping method makes use of existing infrastructure in the semiconductor industry and can be extended to other 2D materials and dopant species as well.

  11. Computational Strategies for Dissecting the High-Dimensional Complexity of Adaptive Immune Repertoires

    Directory of Open Access Journals (Sweden)

    Enkelejda Miho

    2018-02-01

    The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
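The diversity measures reviewed above start from the clone frequency distribution of a sequenced repertoire. A minimal sketch, using a made-up toy repertoire and the standard Shannon and Simpson indices (not any specific tool from the review):

```python
import math
from collections import Counter

def clone_frequencies(clones):
    """Normalize raw clone counts into a frequency distribution."""
    counts = Counter(clones)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def shannon_diversity(freqs):
    """Shannon entropy of the clone frequency distribution (in nats)."""
    return -sum(p * math.log(p) for p in freqs.values() if p > 0)

def simpson_diversity(freqs):
    """Simpson's index: probability that two random receptors differ."""
    return 1.0 - sum(p * p for p in freqs.values())

# Toy repertoire of hypothetical CDR3 sequences, one clone expanded
rep = ["CASSL"] * 6 + ["CASSD"] * 2 + ["CAWSV"] * 2
f = clone_frequencies(rep)
print(round(shannon_diversity(f), 3), round(simpson_diversity(f), 3))
```

Clonal expansion lowers both indices relative to a uniform repertoire, which is why such measures are used to compare repertoire architectures.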

  12. Direct fabrication of high-resolution three-dimensional polymeric scaffolds using electrohydrodynamic hot jet plotting

    International Nuclear Information System (INIS)

    Wei, Chuang; Dong, Jingyan

    2013-01-01

    This paper presents the direct three-dimensional (3D) fabrication of polymer scaffolds with sub-10 µm structures using electrohydrodynamic jet (EHD-jet) plotting of melted thermoplastic polymers. Traditional extrusion-based fabrication approaches of 3D periodic porous structures are very limited in their resolution, due to the excessive pressure requirement for extruding highly viscous thermoplastic polymers. EHD-jet printing has become a high-resolution alternative to other forms of nozzle deposition-based fabrication approaches by generating micro-scale liquid droplets or a fine jet through the application of a large electrical voltage between the nozzle and the substrate. In this study, we successfully apply EHD-jet plotting technology with melted biodegradable polymer (polycaprolactone, or PCL) for the fabrication of 2D patterns and 3D periodic porous scaffold structures in potential tissue engineering applications. Process conditions (e.g. electrical voltage, pressure, plotting speed) have been thoroughly investigated to achieve reliable jet printing of fine filaments. We have demonstrated for the first time that the EHD-jet plotting process is capable of the fabrication of 3D periodic structures with sub-10 µm resolution, which has great potential in advanced biomedical applications, such as cell alignment and guidance. (paper)

  13. Analysis of oxidised heavy paraffinic products by high temperature comprehensive two-dimensional gas chromatography.

    Science.gov (United States)

    Potgieter, H; Bekker, R; Beigley, J; Rohwer, E

    2017-08-04

    Heavy petroleum fractions are produced during crude and synthetic crude oil refining processes and need to be upgraded to usable products to increase their market value. Usually these fractions are upgraded to fuel products by hydrocracking, hydroisomerization and hydrogenation processes. They are also upgraded to other high-value commercial products, such as lubricant oils and waxes, by distillation, hydrogenation, oxidation and/or blending. Oxidation of hydrogenated heavy paraffinic fractions produces high-value products that contain a variety of oxygenates, and the characterization of these heavy oxygenates is very important for the control of oxidation processes. Traditionally, titrimetric procedures are used to monitor oxygenate formation; however, these procedures are tedious and lack selectivity toward specific oxygenate classes in complex matrices. Comprehensive two-dimensional gas chromatography (GC×GC) increases peak capacity for the comprehensive analysis of complex samples. Other groups have used high-temperature GC×GC (HT-GC×GC) to extend the carbon number range attainable by GC×GC and have optimised HT-GC×GC parameters for the separation of aromatics, nitrogen-containing compounds and sulphur-containing compounds in heavy petroleum fractions. In this study, HT-GC×GC column combinations are optimised for the separation of oxygenates in oxidised heavy paraffinic fractions. The advantages of the HT-GC×GC method for monitoring the oxidation reactions of heavy paraffinic fraction samples are illustrated. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Time-efficient, high-resolution, whole brain three-dimensional macromolecular proton fraction mapping.

    Science.gov (United States)

    Yarnykh, Vasily L

    2016-05-01

    Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique using a minimal number of source images for scan time reduction. The described technique is based on replacing an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with an isotropic 1.25 × 1.25 × 1.25 mm3 voxel size and a scan time of 20 min. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from eight healthy subjects. Mean MPF values in segmented white and gray matter appeared in close agreement, with no significant bias and small within-subject coefficients of variation. The resulting maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details, including gray matter structures with high iron content. The proposed synthetic reference method improves the resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. © 2015 Wiley Periodicals, Inc.

  15. Mesoporous Three-Dimensional Graphene Networks for Highly Efficient Solar Desalination under 1 sun Illumination.

    Science.gov (United States)

    Kim, Kwanghyun; Yu, Sunyoung; An, Cheolwon; Kim, Sung-Wook; Jang, Ji-Hyun

    2018-05-09

    Solar desalination via thermal evaporation of seawater is one of the most promising technologies for addressing the serious problem of global water scarcity, because it requires no energy input for generating clean water beyond abundant solar energy. However, low efficiency and a large amount of heat loss are considered critical limitations of solar desalination technology. The combination of mesoporous three-dimensional graphene networks (3DGNs), with their high solar absorption, and water-transporting wood pieces, with their thermal insulation, has exhibited greatly enhanced solar-to-vapor conversion efficiency. 3DGN deposited on a wood piece provides an outstanding solar-to-vapor conversion efficiency of about 91.8% under 1 sun illumination and excellent desalination performance, reducing salinity by five orders of magnitude. The mass-producible 3DGN, enriched with many mesopores, efficiently releases vapor from an enormous surface area through heat localization on the top surface of the wood piece. Because the efficient solar desalination device made of 3DGN on a wood piece is highly scalable and inexpensive, it could serve as one of the main sources for the worldwide supply of purified water, achieved with earth-abundant materials and no extra supporting energy source.

  16. Extending the Generalised Pareto Distribution for Novelty Detection in High-Dimensional Spaces.

    Science.gov (United States)

    Clifton, David A; Clifton, Lei; Hugueny, Samuel; Tarassenko, Lionel

    2014-01-01

    Novelty detection involves the construction of a "model of normality", and then classifies test data as being either "normal" or "abnormal" with respect to that model. For this reason, it is often termed one-class classification. The approach is suitable for cases in which examples of "normal" behaviour are commonly available, but in which cases of "abnormal" data are comparatively rare. When performing novelty detection, we are typically most interested in the tails of the normal model, because it is in these tails that a decision boundary between "normal" and "abnormal" areas of data space usually lies. Extreme value statistics provides an appropriate theoretical framework for modelling the tails of univariate (or low-dimensional) distributions, using the generalised Pareto distribution (GPD), which can be demonstrated to be the limiting distribution for data occurring within the tails of most practically-encountered probability distributions. This paper provides an extension of the GPD, allowing the modelling of probability distributions of arbitrarily high dimension, such as occurs when using complex, multimodal, multivariate distributions for performing novelty detection in most real-life cases. We demonstrate our extension to the GPD using examples from patient physiological monitoring, in which we have acquired data from hospital patients in large clinical studies of high-acuity wards, and in which we wish to determine "abnormal" patient data, such that early warning of patient physiological deterioration may be provided.
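The univariate peaks-over-threshold idea that underlies the GPD tail model can be sketched as follows; the synthetic data, threshold quantile and scoring rule are illustrative assumptions, not the paper's high-dimensional extension:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Normal" training scores, e.g. distances of data to a model of normality
train = rng.normal(size=5000)

# Peaks-over-threshold: model exceedances above a high quantile with a GPD
u = np.quantile(train, 0.95)          # threshold in the upper tail
exceed = train[train > u] - u         # exceedances over the threshold
c, loc, scale = stats.genpareto.fit(exceed, floc=0.0)

def novelty_prob(x):
    """P(X > x) under the fitted tail model; small values flag novelty."""
    if x <= u:
        return 1.0                    # below threshold: treated as normal
    tail = stats.genpareto.sf(x - u, c, loc=0.0, scale=scale)
    return 0.05 * tail                # P(X > u) * P(X > x | X > u)

print(novelty_prob(0.0), novelty_prob(4.5))
```

Thresholding `novelty_prob` at a small probability then yields the "normal"/"abnormal" decision boundary in the tail.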

  17. Sideband instability analysis based on a one-dimensional high-gain free electron laser model

    Science.gov (United States)

    Tsai, Cheng-Ying; Wu, Juhao; Yang, Chuan; Yoon, Moohyun; Zhou, Guanqun

    2017-12-01

    When an untapered high-gain free electron laser (FEL) reaches saturation, the exponential growth ceases and the radiation power starts to oscillate about an equilibrium. The FEL radiation power or efficiency can be increased by undulator tapering. For a high-gain tapered FEL, although the power is enhanced after the first saturation, it is known that there is a so-called second saturation where the FEL power growth stops even with a tapered undulator system. The sideband instability is one of the primary reasons for this second saturation. In this paper, we provide a quantitative analysis of how the gradient of undulator tapering can mitigate sideband growth. The study is carried out semianalytically and compared with one-dimensional numerical simulations. The physical parameters are taken from a Linac Coherent Light Source-like electron bunch and undulator system. The sideband field gain and the evolution of the radiation spectra for different gradients of undulator tapering are examined. It is found that a strong undulator tapering (˜10 %) provides effective suppression of the sideband instability in the postsaturation regime.

  18. Large angle and high linearity two-dimensional laser scanner based on voice coil actuators

    Science.gov (United States)

    Wu, Xin; Chen, Sihai; Chen, Wei; Yang, Minghui; Fu, Wen

    2011-10-01

    A large-angle, high-linearity two-dimensional laser scanner with an in-house deflection angle detecting system is developed based on a direct-drive voice coil actuator mechanism. The specially designed voice coil actuators allow the steering mirror to move through a sufficiently large angle. A frequency sweep method based on virtual instruments is employed to determine the natural frequency of the laser scanner. The response shows that the performance of the laser scanner is limited by mechanical resonances. A closed-loop controller based on a mathematical model is used to reduce the oscillation of the laser scanner at the resonance frequency. To design a qualified controller, a model of the laser scanner is set up, and the transfer function of the model is identified with MATLAB from the measured data. After introducing the controller, the nonlinearity decreases from 13.75% to 2.67% at 50 Hz. The laser scanner also offers other advantages such as a large mirror deflection, a compact mechanical structure, and a high scanning speed.

  19. High-dimensional neural network potentials for solvation: The case of protonated water clusters in helium

    Science.gov (United States)

    Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik

    2018-03-01

    The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.
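The pairwise additive decomposition described above (the total He-solute interaction energy as a sum of one network evaluation per helium atom) can be sketched with a placeholder random-weight network standing in for the trained NNP; the descriptor size and all weights below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder two-layer network standing in for the trained He-solute NNP;
# input: a symmetry-function-like descriptor of one He atom's environment,
# output: that atom's contribution to the interaction energy.
W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=8) * 0.1, 0.0

def atomic_energy(desc):
    """Energy contribution of a single helium atom."""
    h = np.tanh(desc @ W1 + b1)
    return float(h @ W2 + b2)

def interaction_energy(descriptors):
    """Pairwise-additive total: one network evaluation per helium atom."""
    return sum(atomic_energy(d) for d in descriptors)

helium_descs = rng.normal(size=(5, 4))   # 5 He atoms, 4 descriptor values each
print(interaction_energy(helium_descs))
```

The additivity is what lets the same trained network scale from a few-atom microsolvation shell to bulk helium: the cost grows linearly in the number of helium atoms.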

  20. A novel three-dimensional and high-definition flexible scope.

    Science.gov (United States)

    Nishiyama, Kenichi; Natori, Yoshihiro; Oka, Kazunari

    2014-06-01

    Recent innovations in digital surgical technology have led to advances in three-dimensional (3D) and high-definition (HD) operating scopes. We introduce a novel 3D-HD flexible surgical scope called "3D-Eye-Flex" and evaluate its utility as an alternative to the operating microscope. The 3D-Eye-Flex has a 15-mm-long 3D-HD scope head with a 15 mm outer diameter, a focal distance of 18-100 mm and an 80° angle of view. Attached to a 615-mm-long flexible bellows, the 3D-Eye-Flex can be easily fixed to the operating table. Microsurgical dissection of wet brain tissue and drilling of a skull base model were performed under the scope while using the 3D-HD video monitor. The scope system provided excellent illumination and image quality during the procedures. Its large depth of field with stereoscopic vision is a clear advantage over the operating microscope. The 3D-Eye-Flex was easy to manipulate and provided an abundance of space above the operative field. Surgeons felt comfortable while working and could easily shift the position of the scope. This novel 3D-HD flexible scope is an effective alternative to the operating microscope as a new "surgeon's eye" and will be suitable for digital image-based surgery with further refinement.

  1. Highly sensitive three-dimensional interdigitated microelectrode for microparticle detection using electrical impedance spectroscopy

    International Nuclear Information System (INIS)

    Chang, Fu-Yu; Chen, Ming-Kun; Jang, Ling-Sheng; Wang, Min-Haw

    2016-01-01

    Cell impedance analysis is widely used for monitoring biological and medical reactions. In this study, a highly sensitive three-dimensional (3D) interdigitated microelectrode (IME) with a high aspect ratio on a polyimide (PI) flexible substrate was fabricated for microparticle detection (e.g. cell quantity detection) using electroforming and lithography technology. 3D finite element simulations were performed to compare the performance of the 3D IME (in terms of sensitivity and signal-to-noise ratio) to that of a planar IME for particles in the sensing area. Various quantities of particles were captured in Dulbecco’s modified Eagle medium and their impedances were measured. With the 3D IME, the particles were arranged in the gap rather than on the electrode, avoiding noise due to particle position. For the maximum particle quantities, the results show that the 3D IME has at least 5-fold higher sensitivity than the planar IME. The trends of impedance magnitude and phase with particle quantity were verified using an equivalent circuit model. The impedance (1269 Ω) of 69 particles was used to estimate the particle quantity (68 particles) with 98.6% accuracy using a parabolic regression curve at 500 kHz. (paper)
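The parabolic regression step (estimating particle quantity from the impedance magnitude at 500 kHz) can be sketched as below; the calibration numbers are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration data: particle quantity vs. measured |Z| at 500 kHz
quantities = np.array([0, 10, 20, 30, 40, 50, 60, 69])
impedance = np.array([900, 960, 1015, 1065, 1110, 1160, 1215, 1270])  # ohms

# Fit a parabolic (second-order) regression of quantity on impedance
coeffs = np.polyfit(impedance, quantities, deg=2)

def estimate_quantity(z_ohm):
    """Estimate the particle quantity from a measured impedance magnitude."""
    return float(np.polyval(coeffs, z_ohm))

print(round(estimate_quantity(1269)))
```

Inverting the calibration this way (quantity as a function of impedance) avoids having to solve the quadratic at measurement time.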

  2. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models.

    Directory of Open Access Journals (Sweden)

    Cemal Cagatay Bilgin

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation.

  3. Three-Dimensional Triplet Tracking for LHC and Future High Rate Experiments

    CERN Document Server

    Schöning, Andre

    2014-10-20

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points.

  4. Dimensional stability under wet curing of mortars containing high amounts of nitrates and phosphates

    International Nuclear Information System (INIS)

    Benard, P.; Cau Dit Coumes, C.; Garrault, S.; Nonat, A.; Courtois, S.

    2008-01-01

    Investigations were carried out in order to solidify in cement some aqueous streams resulting from nuclear decommissioning processes and characterized by a high salinity (300 g/L), as well as important concentrations of nitrate (150-210 g/L) and phosphate ions (0-50 g/L). Special attention was paid to the influence of these compounds on the dimensional variations under wet curing of simulated solidified waste forms. The length changes of mortars containing nitrate salts only (KNO3, NaNO3) were shown to be governed by a concentration effect which involved osmosis: the higher their concentration in the mixing solution, the higher the swelling. The expansion of mortars containing high amounts of phosphates (≥ 30 g/L in the mixing solution) was preceded by a shrinkage which increased with the phosphate concentration, and which could be suppressed by seeding the cement used with hydroxyapatite crystals. This transitory shrinkage was attributed to the conversion into hydroxyapatite of a precursor readily precipitated in the cement paste after mixing.

  5. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    Science.gov (United States)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
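The radiosity balance described above (flux at each wall element = direct component plus re-emission from other elements, weighted by one minus the sticking probability) reduces to a linear system. This toy sketch uses placeholder direct-flux and element-to-element kernels, not the paper's actual view factors:

```python
import numpy as np

# Toy 1D radiosity sketch for a high-aspect-ratio hole: the sidewall is
# split into N rings at depths z_i; each ring receives a direct flux from
# the opening plus flux re-emitted by other rings with probability (1 - s),
# where s is the sticking probability.
N, aspect = 50, 10.0
z = (np.arange(N) + 0.5) / N * aspect          # ring depths (units of radius)
s = 0.1                                        # sticking probability

direct = 1.0 / (1.0 + z**2)                    # placeholder direct-flux falloff
dz = aspect / N
K = dz / (1.0 + (z[:, None] - z[None, :])**2)  # placeholder ring-to-ring kernel
K /= K.sum(axis=1, keepdims=True)              # normalize rows (energy bound)

# Solve f = direct + (1 - s) K f  <=>  (I - (1 - s) K) f = direct
f = np.linalg.solve(np.eye(N) - (1.0 - s) * K, direct)
print(float(f[0]), float(f[-1]))
```

Because the re-emission matrix has spectral radius below one for any s > 0, the system has a unique positive solution: the one-dimensional analogue of why the radiosity approach replaces costly 3D ray tracing.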

  6. Three-dimensional triplet tracking for LHC and future high rate experiments

    International Nuclear Information System (INIS)

    Schöning, A

    2014-01-01

    The hit combinatorial problem is a main challenge for track reconstruction and triggering at high rate experiments. At hadron colliders the dominant fraction of hits is due to low momentum tracks for which multiple scattering (MS) effects dominate the hit resolution. MS is also the dominating source for hit confusion and track uncertainties in low energy precision experiments. In all such environments, where MS dominates, track reconstruction and fitting can be largely simplified by using three-dimensional (3D) hit-triplets as provided by pixel detectors. This simplification is possible since track uncertainties are solely determined by MS if high precision spatial information is provided. Fitting of hit-triplets is especially simple for tracking detectors in solenoidal magnetic fields. The over-constrained 3D-triplet method provides a complete set of track parameters and is robust against fake hit combinations. Full tracks can be reconstructed step-wise by connecting hit triplet combinations from different layers, thus heavily reducing the combinatorial problem and accelerating track linking. The triplet method is ideally suited for pixel detectors where hits can be treated as 3D-space points. With the advent of relatively cheap and industrially available CMOS-sensors the construction of highly granular full scale pixel tracking detectors seems to be possible also for experiments at LHC or future high energy (hadron) colliders. In this paper tracking performance studies for full-scale pixel detectors, including their optimisation for 3D-triplet tracking, are presented. The results obtained for different types of tracker geometries and different reconstruction methods are compared. The potential of reducing the number of tracking layers and - along with that - the material budget using this new tracking concept is discussed. The possibility of using 3D-triplet tracking for triggering and fast online reconstruction is highlighted

  7. High-resolution three-dimensional imaging and analysis of rock falls in Yosemite valley, California

    Science.gov (United States)

    Stock, Gregory M.; Bawden, G.W.; Green, J.K.; Hanson, E.; Downing, G.; Collins, B.D.; Bond, S.; Leslar, M.

    2011-01-01

    We present quantitative analyses of recent large rock falls in Yosemite Valley, California, using integrated high-resolution imaging techniques. Rock falls commonly occur from the glacially sculpted granitic walls of Yosemite Valley, modifying this iconic landscape but also posing significant potential hazards and risks. Two large rock falls occurred from the cliff beneath Glacier Point in eastern Yosemite Valley on 7 and 8 October 2008, causing minor injuries and damaging structures in a developed area. We used a combination of gigapixel photography, airborne laser scanning (ALS) data, and ground-based terrestrial laser scanning (TLS) data to characterize the rock-fall detachment surface and adjacent cliff area, quantify the rock-fall volume, evaluate the geologic structure that contributed to failure, and assess the likely failure mode. We merged the ALS and TLS data to resolve the complex, vertical to overhanging topography of the Glacier Point area in three dimensions, and integrated these data with gigapixel photographs to fully image the cliff face in high resolution. Three-dimensional analysis of repeat TLS data reveals that the cumulative failure consisted of a near-planar rock slab with a maximum length of 69.0 m, a mean thickness of 2.1 m, a detachment surface area of 2750 m2, and a volume of 5663 ± 36 m3. Failure occurred along a surface-parallel, vertically oriented sheeting joint in a clear example of granitic exfoliation. Stress concentration at crack tips likely propagated fractures through the partially attached slab, leading to failure. Our results demonstrate the utility of high-resolution imaging techniques for quantifying far-range (>1 km) rock falls occurring from the largely inaccessible, vertical rock faces of Yosemite Valley, and for providing highly accurate and precise data needed for rock-fall hazard assessment. © 2011 Geological Society of America.

  8. GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection

    Science.gov (United States)

    Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    Although we have entered the age of petascale computing, many factors are still prohibiting high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPUs to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, the GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to GPUs, there is a strong message of corporate support for GPUs in HPC. However, many doubts linger concerning the practicality of using GPUs for scientific computing. In particular, the GPU has a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists the GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPUs to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Benard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500 × 500 × 250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of O(0.1) µs per timestep per gridpoint at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2×10^7, on a single GPU.
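The core of such a second-order finite-difference convection code is the seven-point Laplacian stencil applied at every grid point. A minimal CPU-side analogue in Python (diffusion step only; the advection and buoyancy terms of the full Rayleigh-Benard model are omitted, and the grid size here is a toy assumption):

```python
import numpy as np

def laplacian3d(T, h):
    """Second-order central-difference Laplacian on an evenly spaced grid,
    interior points only (the stencil a GPU kernel applies per gridpoint)."""
    return (T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1]
          + T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1]
          + T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2]
          - 6.0 * T[1:-1, 1:-1, 1:-1]) / h**2

# One explicit diffusion step of a temperature field: a unit hot spot
# spreads to its six neighbours while total heat is conserved.
h, dt, kappa = 1.0, 0.1, 1.0
T = np.zeros((16, 16, 16))
T[8, 8, 8] = 1.0
T[1:-1, 1:-1, 1:-1] += dt * kappa * laplacian3d(T, h)
print(T.sum(), T[8, 8, 8])
```

On a GPU the same stencil is evaluated in parallel, one thread per gridpoint, which is where the per-timestep-per-gridpoint cost quoted in the abstract comes from.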

  9. High-resolution three-dimensional mapping of semiconductor dopant potentials

    DEFF Research Database (Denmark)

    Twitchett, AC; Yates, TJV; Newcomb, SB

    2007-01-01

    Semiconductor device structures are becoming increasingly three-dimensional at the nanometer scale. A key issue that must be addressed to enable future device development is the three-dimensional mapping of dopant distributions, ideally under "working conditions". Here we demonstrate how a combination of electron holography and electron tomography can be used to determine quantitatively the three-dimensional electrostatic potential in an electrically biased semiconductor device with nanometer spatial resolution.

  10. Profiling stem cell states in three-dimensional biomaterial niches using high content image informatics.

    Science.gov (United States)

    Dhaliwal, Anandika; Brenner, Matthew; Wolujewicz, Paul; Zhang, Zheng; Mao, Yong; Batish, Mona; Kohn, Joachim; Moghe, Prabhas V

    2016-11-01

    materials relies on technologies that can sensitively discern cell response dynamics to biomaterials, while capturing cell-to-cell heterogeneity and preserving cellular native phenotypes. In this study, we illustrate the application of a novel high content image informatics platform to classify emergent human mesenchymal stem cell (hMSC) phenotypes in a diverse range of 3-D biomaterial scaffolds with high sensitivity and precision, and track cell responses to varied external stimuli. A major in silico innovation is the proposed image profiling technology based on unique three-dimensional textural signatures of a mechanoreporter protein within the nuclei of stem cells cultured in 3-D scaffolds. This technology will accelerate the pace of high-fidelity biomaterial screening. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  11. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers, in terms of the median number of features selected, than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM
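
    As a reference point, the penalty this record describes combines SCAD and ridge terms. A minimal sketch of the SCAD penalty of Fan and Li and an Elastic-SCAD-style combination follows; parameter names (`lam1`, `lam2`, the conventional `a=3.7`) are illustrative, and the paper's actual method fits this penalty inside the SVM objective rather than evaluating it in isolation:

```python
import numpy as np

def scad(beta, lam, a=3.7):
    """SCAD penalty (Fan & Li, 2001), evaluated elementwise:
    linear near zero, quadratic taper, then constant."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(
        small, lam * b,
        np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                 lam**2 * (a + 1) / 2))

def elastic_scad(beta, lam1, lam2, a=3.7):
    """Elastic SCAD: SCAD term plus a ridge term on the coefficients."""
    beta = np.asarray(beta, float)
    return float(np.sum(scad(beta, lam1, a)) + lam2 * np.sum(beta**2))
```

    The constant tail of SCAD is what keeps large coefficients nearly unbiased, while the added ridge term stabilizes the solution when informative features are correlated or the data are non-sparse.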

  12. AucPR: an AUC-based approach using penalized regression for disease prediction with high-dimensional omics data.

    Science.gov (United States)

    Yu, Wenbao; Park, Taesung

    2014-01-01

    It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray data sets and synthetic data. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful, parametric, and easily implementable linear classifier, AucPR, for gene selection and disease prediction with high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
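
    The quantity such a method maximizes is the empirical AUC of a linear marker combination. A minimal sketch of that objective (not the paper's algorithm, which reaches the maximizer through a penalized-regression reformulation; function and argument names are illustrative):

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Empirical AUC: the fraction of (case, control) pairs in which
    the case scores higher, counting ties as 1/2."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

def combination_auc(beta, X_pos, X_neg):
    """AUC of the linear combination X @ beta over cases vs. controls --
    the objective a parametric AUC maximizer targets."""
    return empirical_auc(X_pos @ beta, X_neg @ beta)
```

    Because this pairwise-indicator objective is neither smooth nor concave in `beta`, non-parametric methods smooth it; the parametric route described above sidesteps that by recasting the maximizer as a regression problem.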

  13. Peripheral Vasculature: High-Temporal- and High-Spatial-Resolution Three-dimensional Contrast-enhanced MR Angiography

    Science.gov (United States)

    Haider, Clifton R.; Glockner, James F.; Stanson, Anthony W.; Riederer, Stephen J.

    2009-01-01

    Purpose: To prospectively evaluate the feasibility of performing high-spatial-resolution (1-mm isotropic) time-resolved three-dimensional (3D) contrast material–enhanced magnetic resonance (MR) angiography of the peripheral vasculature with Cartesian acquisition with projection-reconstruction–like sampling (CAPR) and eightfold accelerated two-dimensional (2D) sensitivity encoding (SENSE). Materials and Methods: All studies were approved by the institutional review board and were HIPAA compliant; written informed consent was obtained from all participants. There were 13 volunteers (mean age, 41.9; range, 27–53 years). The CAPR sequence was adapted to provide 1-mm isotropic spatial resolution and a 5-second frame time. Use of different receiver coil element sizes for those placed on the anterior-to-posterior versus left-to-right sides of the field of view reduced signal-to-noise ratio loss due to acceleration. Results from eight volunteers were rated independently by two radiologists according to prominence of artifact, arterial to venous separation, vessel sharpness, continuity of arterial signal intensity in major arteries (anterior and posterior tibial, peroneal), demarcation of origin of major arteries, and overall diagnostic image quality. MR angiographic results in two patients with peripheral vascular disease were compared with their results at computed tomographic angiography. Results: The sequence exhibited no image artifact adversely affecting diagnostic image quality. Temporal resolution was evaluated to be sufficient in all cases, even with known rapid arterial to venous transit. The vessels were graded to have excellent sharpness, continuity, and demarcation of the origins of the major arteries. Distal muscular branches and the communicating and perforating arteries were routinely seen. Excellent diagnostic quality rating was given for 15 (94%) of 16 evaluations. Conclusion: The feasibility of performing high-diagnostic-quality time-resolved 3D

  14. Quasi-two-dimensional metallic hydrogen inside di-phosphide at high pressure

    International Nuclear Information System (INIS)

    Degtyarenko, N N; Mazur, E A

    2016-01-01

    Mathematical modelling was used to calculate the structural, electronic, phononic, and other characteristics of various normal phases of phosphorus hydrides with stoichiometry PHk. It was shown that di-phosphine may form a 2D lattice of metallic hydrogen, stabilized by phosphorus atoms, under high hydrostatic pressure. The resulting structure with H-P-H elements has a locally stable (or metastable) phonon spectrum. The properties of di-phosphine were compared with those of similar structures such as the sulphur hydrides. (paper)

  15. Towards automatic metabolomic profiling of high-resolution one-dimensional proton NMR spectra

    Energy Technology Data Exchange (ETDEWEB)

    Mercier, Pascal; Lewis, Michael J.; Chang, David, E-mail: dchang@chenomx.com [Chenomx Inc (Canada); Baker, David [Pfizer Inc (United States); Wishart, David S. [University of Alberta, Department of Computing Science and Biological Sciences (Canada)

    2011-04-15

    Nuclear magnetic resonance (NMR) and Mass Spectroscopy (MS) are the two most common spectroscopic analytical techniques employed in metabolomics. The large spectral datasets generated by NMR and MS are often analyzed using data reduction techniques like Principal Component Analysis (PCA). Although rapid, these methods are susceptible to solvent and matrix effects, high rates of false positives, lack of reproducibility and limited data transferability from one platform to the next. Given these limitations, a growing trend in both NMR and MS-based metabolomics is towards targeted profiling or 'quantitative' metabolomics, wherein compounds are identified and quantified via spectral fitting prior to any statistical analysis. Despite the obvious advantages of this method, targeted profiling is hindered by the time required to perform manual or computer-assisted spectral fitting. In an effort to increase data analysis throughput for NMR-based metabolomics, we have developed an automatic method for identifying and quantifying metabolites in one-dimensional (1D) proton NMR spectra. This new algorithm is capable of using carefully constructed reference spectra and optimizing thousands of variables to reconstruct experimental NMR spectra of biofluids using rules and concepts derived from physical chemistry and NMR theory. The automated profiling program has been tested against spectra of synthetic mixtures as well as biological spectra of urine, serum and cerebral spinal fluid (CSF). Our results indicate that the algorithm can correctly identify compounds with high fidelity in each biofluid sample (except for urine). Furthermore, the metabolite concentrations exhibit a very high correlation with both simulated and manually-detected values.
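
    At its core, targeted profiling fits the measured spectrum as a combination of reference compound spectra. A stripped-down linear least-squares sketch in Python/NumPy; a production profiler of the kind described also optimizes peak positions, linewidths, and nonnegativity, so this only illustrates the central fitting idea:

```python
import numpy as np

def fit_concentrations(mixture, references):
    """Fit a measured 1-D spectrum (intensity per ppm bin) as a linear
    combination of reference spectra; the fitted coefficients play the
    role of relative compound concentrations."""
    A = np.column_stack([np.asarray(r, float) for r in references])
    conc, *_ = np.linalg.lstsq(A, np.asarray(mixture, float), rcond=None)
    return conc
```

    Real biofluid spectra add baseline distortions and pH-dependent peak shifts, which is why the algorithm above must optimize thousands of variables rather than solve a single linear system.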

  16. Towards automatic metabolomic profiling of high-resolution one-dimensional proton NMR spectra

    International Nuclear Information System (INIS)

    Mercier, Pascal; Lewis, Michael J.; Chang, David; Baker, David; Wishart, David S.

    2011-01-01

    Nuclear magnetic resonance (NMR) and Mass Spectroscopy (MS) are the two most common spectroscopic analytical techniques employed in metabolomics. The large spectral datasets generated by NMR and MS are often analyzed using data reduction techniques like Principal Component Analysis (PCA). Although rapid, these methods are susceptible to solvent and matrix effects, high rates of false positives, lack of reproducibility and limited data transferability from one platform to the next. Given these limitations, a growing trend in both NMR and MS-based metabolomics is towards targeted profiling or “quantitative” metabolomics, wherein compounds are identified and quantified via spectral fitting prior to any statistical analysis. Despite the obvious advantages of this method, targeted profiling is hindered by the time required to perform manual or computer-assisted spectral fitting. In an effort to increase data analysis throughput for NMR-based metabolomics, we have developed an automatic method for identifying and quantifying metabolites in one-dimensional (1D) proton NMR spectra. This new algorithm is capable of using carefully constructed reference spectra and optimizing thousands of variables to reconstruct experimental NMR spectra of biofluids using rules and concepts derived from physical chemistry and NMR theory. The automated profiling program has been tested against spectra of synthetic mixtures as well as biological spectra of urine, serum and cerebral spinal fluid (CSF). Our results indicate that the algorithm can correctly identify compounds with high fidelity in each biofluid sample (except for urine). Furthermore, the metabolite concentrations exhibit a very high correlation with both simulated and manually-detected values.

  17. High-resolution two-dimensional liquid chromatography analysis of key linker drug intermediate used in antibody drug conjugates.

    Science.gov (United States)

    Venkatramani, C J; Huang, Shu Rong; Al-Sayah, Mohammad; Patel, Ila; Wigman, Larry

    2017-10-27

    In this manuscript, the application of high-resolution sampling (HRS) two-dimensional liquid chromatography (2D-LC) to the detailed analysis of a key linker drug intermediate is presented. Using HRS, selected regions of the primary column eluent were transferred to a secondary column with fidelity, enabling qualitative and quantitative analysis of linker drugs. The primary column purity of the linker drug intermediate ranged from 88.9% to 94.5% and the secondary column purity ranged from 99.6% to 99.9%, showing lot-to-lot variability and significant differences between the three lots, and substantiating the synthetic and analytical challenges of ADCs. Over 15 impurities co-eluting with the linker drug intermediate in the primary dimension were resolved in the secondary dimension. The concentrations of most of these impurities were over three orders of magnitude lower than that of the linker drug. Effective peak focusing and high-speed secondary column analysis resulted in sharp peaks in the secondary dimension, improving the signal-to-noise ratios. The sensitivity of the 2D-LC separation was over fivefold better than that of a conventional HPLC separation. The limit of quantitation (LOQ) was less than 0.01%. Many peaks originating from the primary dimension were resolved into multiple components in the complementary secondary dimension, demonstrating the complexity of these samples. The 2D-LC was highly reproducible, showing good precision between runs with %RSD of peak areas less than 0.1 for the main component. The absolute differences in the peak areas of impurities below 0.1% were within ±0.01%, and for impurities in the range of 0.1%-0.3% the absolute differences were within ±0.02%, which are comparable to 1D-LC. The overall purity of the linker drug intermediate was determined from the product of the primary and secondary column purities (HPLC Purity = % peak area of main component in the primary dimension × % peak area of main component in the secondary dimension). Additionally, the 2D-LC separation enables
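
    The overall-purity rule quoted in the abstract is a simple product of percentage purities. A one-line helper makes the arithmetic explicit (function name is illustrative):

```python
def overall_purity(primary_pct, secondary_pct):
    """Overall 2D-LC purity as the product of the primary- and
    secondary-dimension %-area purities, per the definition quoted in
    the abstract; inputs and output are in percent."""
    return primary_pct * secondary_pct / 100.0
```

    For example, a lot with 94.5% primary purity and 99.9% secondary purity has an overall purity of about 94.41%.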

  18. High-precision two-dimensional atom localization via quantum interference in a tripod-type system

    International Nuclear Information System (INIS)

    Wang, Zhiping; Yu, Benli

    2014-01-01

    A scheme is proposed for high-precision two-dimensional atom localization in a four-level tripod-type atomic system via measurement of the excited state population. It is found that because of the position-dependent atom–field interaction, the precision of 2D atom localization can be significantly improved by appropriately adjusting the system parameters. Our scheme may be helpful in laser cooling or atom nanolithography via high-precision and high-resolution atom localization. (letter)

  19. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
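
    The "summation of log-transformed squared t-statistics" can be sketched for the one-sample case: per coordinate, the classical univariate likelihood ratio gives -2 log Lambda_j = n * log(1 + t_j^2/(n-1)), and the diagonal statistic sums these over all p dimensions. A minimal sketch (the centering and scaling needed for the asymptotic normal reference distribution are omitted):

```python
import numpy as np

def diag_lrt_one_sample(X):
    """Raw one-sample diagonal LRT statistic for H0: mean = 0.
    X is an (n, p) data matrix; t_j is the usual one-sample
    t-statistic of column j, and the statistic is the sum of
    n * log(1 + t_j^2 / (n - 1)) over the p coordinates."""
    X = np.asarray(X, float)
    n = X.shape[0]
    t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)
    return float(np.sum(n * np.log1p(t**2 / (n - 1))))
```

    The log transform is what distinguishes this statistic from the diagonal Hotelling's test, which sums the squared t-statistics directly.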

  20. Biomarker identification and effect estimation on schizophrenia – a high dimensional data analysis

    Directory of Open Access Journals (Sweden)

    Yuanzhang Li

    2015-05-01

    Biomarkers have been examined in schizophrenia research for decades. Elevated medical morbidity and mortality rates, as well as personal and societal costs, are associated with schizophrenia. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is no commonly accepted statistical approach to identify predictive biomarkers from high-dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers that are associated with the risk of schizophrenia. Then, we used the gradient scores generated from the selected biomarkers as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression trees (CART), for comparison with the biomarkers selected by DGR, and found that about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate the individual effect of each biomarker within their combined effect. In a DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia in males; Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.